Generative AI · Intermediate · Also: In-Context Learning, Zero-Shot Prompting, Few-Shot Prompting

Zero-Shot & Few-Shot Learning

Definition

The ability of AI models to perform tasks they were not explicitly trained on. Zero-shot requires no examples; few-shot uses a small number of examples provided in the prompt to guide the model's behavior.

In Depth

Zero-shot learning is a model's ability to perform a task it was never explicitly trained on, using only a natural language description. For example, a language model never specifically trained for sentiment analysis can classify movie reviews as positive or negative when simply prompted: 'Is this review positive or negative?' Few-shot learning extends this by providing a small number of examples in the prompt, showing the model two or three input-output pairs before asking it to handle a new case. This 'teaching by example' within the prompt is also called in-context learning.
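The contrast between the two prompting styles is easiest to see as plain prompt construction. The helper functions below are an illustrative sketch, not part of any particular model's API; the actual model call is omitted, and the example reviews are invented.

```python
def zero_shot_prompt(review: str) -> str:
    # Zero-shot: only a natural-language task description, no examples.
    return f"Is this review positive or negative?\n\nReview: {review}\nAnswer:"

def few_shot_prompt(review: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot (in-context learning): prepend a few labeled
    # input-output pairs, then ask about the new case in the same format.
    shots = "\n\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in examples
    )
    return f"{shots}\n\nReview: {review}\nSentiment:"

examples = [
    ("A stunning, heartfelt film.", "positive"),
    ("Two hours I will never get back.", "negative"),
]
prompt = few_shot_prompt("The plot dragged, but the acting was superb.", examples)
```

Either string would then be sent to the model as-is; in the few-shot case the model completes the final "Sentiment:" line by imitating the pattern of the preceding pairs.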

These capabilities emerged as a surprising consequence of training large language models on diverse internet text. Because the training data contains examples of virtually every text format and task — reviews, translations, Q&A, code, etc. — large models internalize patterns that transfer across tasks. GPT-3 demonstrated that few-shot learning could achieve competitive results on benchmarks without any gradient updates (fine-tuning), a paradigm shift that made powerful AI accessible through prompting rather than machine learning engineering.

Zero-shot and few-shot learning have transformed how AI applications are built. Instead of collecting thousands of labeled examples and training a custom model for each new task (the traditional ML pipeline), developers can now solve many problems by crafting the right prompt with a few illustrative examples. This dramatically reduces development time and cost. However, performance is generally lower than that of fine-tuned models on complex or domain-specific tasks, and results are sensitive to the choice and ordering of the examples. Knowing when to use few-shot prompting versus fine-tuning is a key practical skill.

Key Takeaway

Zero-shot and few-shot learning allow LLMs to perform new tasks from descriptions or a few examples alone — eliminating the need for task-specific training data in many practical applications.

Real-World Applications

01 Rapid prototyping: testing whether an LLM can solve a new task by writing a prompt with 3-5 examples, before committing to a full training pipeline.
02 Classification without training data: categorizing customer feedback, emails, or support tickets by providing a few labeled examples directly in the prompt.
03 Data extraction: few-shot prompts that show 2-3 examples of extracting structured data from unstructured text (names, dates, amounts from invoices).
04 Translation of rare languages: zero-shot translation between language pairs with limited parallel training data, leveraging the model's multilingual knowledge.
05 Code generation: providing a few input-output examples to teach the model a specific function's behavior without formal specification.
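As a concrete sketch of the data-extraction pattern (application 03), a few-shot prompt can pair two worked examples with the new input. The vendor names, dates, and field names below are made up for illustration; any real application would choose its own schema.

```python
def extraction_prompt(invoice_text: str) -> str:
    # Two worked examples of text-to-JSON extraction (the "shots"),
    # followed by the new input for the model to complete.
    shots = (
        'Text: "Invoice from Acme Corp dated 2024-03-01 for $1,200."\n'
        'JSON: {"vendor": "Acme Corp", "date": "2024-03-01", "amount": "$1,200"}\n\n'
        'Text: "Payment of $89.50 due to Blue Sky Ltd on 2024-05-15."\n'
        'JSON: {"vendor": "Blue Sky Ltd", "date": "2024-05-15", "amount": "$89.50"}\n\n'
    )
    return shots + f'Text: "{invoice_text}"\nJSON:'

prompt = extraction_prompt("Remit $432.10 to Nordwind GmbH by 2024-08-01.")
```

The examples double as an implicit schema: by completing the trailing "JSON:" line, the model is nudged toward emitting the same three fields in the same format.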