Generative AI · Intermediate · Also: CoT Prompting, Step-by-Step Reasoning, Reasoning Chains

Chain-of-Thought Prompting

Definition

A prompting technique that instructs a language model to break its reasoning into explicit intermediate steps before arriving at a final answer, significantly improving performance on tasks requiring logic, math, and multi-step reasoning.

In Depth

Chain-of-Thought (CoT) prompting is a technique in which a language model is instructed to produce intermediate reasoning steps before giving a final answer, rather than jumping directly to a conclusion. A simple example: instead of asking 'What is 17 × 24?' and getting a potentially wrong answer, you prompt 'Solve 17 × 24 step by step'. The model then breaks down the calculation (17 × 20 = 340, 17 × 4 = 68, 340 + 68 = 408), making errors visible and improving accuracy. The technique was formalized by Wei et al. at Google in 2022.
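To make the contrast concrete, here is a minimal sketch in Python. The call_model function is a hypothetical placeholder for whatever LLM client you use; it is not a real library API.

    def call_model(prompt: str) -> str:
        """Placeholder: send `prompt` to a language model and return its reply.

        Hypothetical stand-in; wire this to your provider's SDK.
        """
        raise NotImplementedError

    # Direct prompting: the model may jump straight to a (possibly wrong) answer.
    direct_prompt = "What is 17 × 24?"

    # Chain-of-thought prompting: ask for the intermediate steps explicitly.
    cot_prompt = (
        "Solve 17 × 24 step by step. Show each intermediate calculation, "
        "then state the final answer."
    )

    # A CoT reply exposes the steps, e.g.:
    #   17 × 20 = 340
    #   17 × 4  = 68
    #   340 + 68 = 408
    # answer = call_model(cot_prompt)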

CoT prompting dramatically improves LLM performance on tasks requiring multi-step reasoning: arithmetic, logical deduction, word problems, coding, planning, and commonsense reasoning. The improvement grows with model size: smaller models show little benefit, while large models (100B+ parameters) show substantial gains, sometimes more than doubling accuracy on math benchmarks. This suggests that large models have latent reasoning capabilities that standard prompting fails to activate. Variants include Zero-Shot CoT (simply adding 'Let's think step by step' to any prompt) and Self-Consistency (generating multiple reasoning chains and selecting the most common answer); a sketch combining the two follows.
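A minimal sketch of how the two variants combine in practice, assuming hypothetical sample_model and extract_final_answer helpers that stand in for your LLM client and answer parser:

    from collections import Counter

    def sample_model(prompt: str, temperature: float = 0.7) -> str:
        """Placeholder: one sampled completion from your LLM at the given temperature."""
        raise NotImplementedError

    def extract_final_answer(completion: str) -> str:
        """Placeholder: parse the final answer out of a reasoning chain,
        e.g. the text following 'The answer is'."""
        raise NotImplementedError

    def self_consistency(question: str, n_samples: int = 10) -> str:
        # Zero-Shot CoT: the trigger phrase alone elicits step-by-step reasoning.
        cot_prompt = question + "\nLet's think step by step."
        # Self-Consistency: sample several independent reasoning chains...
        answers = [extract_final_answer(sample_model(cot_prompt))
                   for _ in range(n_samples)]
        # ...and return the answer the chains most often agree on.
        return Counter(answers).most_common(1)[0][0]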

CoT prompting has become a fundamental component of prompt engineering and has inspired deeper architectural innovations. 'Thinking' or 'reasoning' models such as OpenAI's o1 and o3 are trained to produce extended chain-of-thought reasoning internally before answering, effectively baking the CoT approach into the model itself. Tree-of-Thought prompting extends CoT by exploring multiple reasoning paths in parallel and backtracking from dead ends rather than committing to a single chain. The success of CoT has shifted the understanding of LLM capabilities: these models can reason more effectively than standard prompting reveals, but they need to be guided to 'show their work.'

Key Takeaway

Chain-of-Thought prompting unlocks LLM reasoning by eliciting step-by-step thinking — it dramatically improves performance on math, logic, and complex tasks by making the reasoning process explicit.

Real-World Applications

01 Mathematical problem-solving: CoT prompting enables LLMs to correctly solve multi-step arithmetic, algebra, and word problems that they fail at with direct prompting.
02 Logical reasoning: breaking down complex logical arguments step by step helps models identify fallacies, check consistency, and reach correct conclusions.
03 Code debugging: prompting a model to trace through code execution step by step helps it identify where bugs occur and why.
04 Decision-making: asking a model to explicitly list and weigh pros and cons before making a recommendation produces more balanced, well-reasoned advice (prompt-template sketches for items 03 and 04 follow this list).
05 Science education: CoT-prompted models can explain physics, chemistry, and biology problems step by step, serving as educational tutors.
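To illustrate items 03 and 04, here are sketches of reusable CoT prompt templates. The exact wording is an assumption to tune for your model and task, not a canonical formulation.

    # Hypothetical prompt templates; the wording is illustrative,
    # not a canonical formulation, so adjust it per model and task.

    DEBUG_TEMPLATE = (
        "Trace through the following code step by step, tracking each "
        "variable's value at every line. Then state where the behavior "
        "first diverges from the intent, and why.\n\n{code}"
    )

    DECISION_TEMPLATE = (
        "List the pros and cons of each option explicitly, weigh them "
        "against the stated goals, and only then give a recommendation."
        "\n\nOptions:\n{options}\n\nGoals:\n{goals}"
    )

    # Example usage with a deliberately buggy snippet:
    prompt = DEBUG_TEMPLATE.format(code="def add(a, b):\n    return a - b")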