Chain-of-Thought Prompting
A prompting technique that encourages LLMs to break down complex problems into step-by-step reasoning, improving accuracy and reliability.
Also known as: CoT Prompting, Step-by-Step Prompting, Think Step by Step
Category: Techniques
Tags: ai, prompting, reasoning, llm-techniques, problem-solving
Explanation
Chain-of-Thought (CoT) prompting is a technique for improving Large Language Model outputs by explicitly requesting step-by-step reasoning. Instead of asking for a direct answer, you prompt the model to 'think through' the problem, showing its work along the way.
This technique is effective because LLMs are autoregressive: they predict each token conditioned on all previous tokens. When the model generates intermediate reasoning steps, these become part of the context that shapes subsequent predictions, so a chain of sound intermediate steps tends to pull the final answer toward correctness.
Common CoT trigger phrases include:
- 'Let's think step by step'
- 'Think about this carefully'
- 'Think hard about...'
- 'Ultrathink' (in some systems like Claude Code)
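In code, applying a trigger phrase is just string concatenation. Here is a minimal sketch, assuming a generic `complete(prompt)` function that stands in for whatever LLM client you use (the name is a placeholder, not a real API):

```python
from typing import Callable

def cot_prompt(question: str, trigger: str = "Let's think step by step.") -> str:
    # Append the trigger phrase so the model generates intermediate
    # reasoning tokens before committing to a final answer.
    return f"{question}\n\n{trigger}"

def ask_with_cot(question: str, complete: Callable[[str], str]) -> str:
    # `complete` is any function that sends a prompt to an LLM
    # and returns its text output.
    return complete(cot_prompt(question))
```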
Variations of CoT include:
- **Zero-shot CoT**: Simply adding 'Let's think step by step' to any prompt
- **Few-shot CoT**: Providing worked examples of step-by-step reasoning before your question (a math-flavored example appears after the list of use cases below)
- **Self-consistency**: Generating multiple reasoning paths and selecting the most common final answer, as sketched below
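Self-consistency is straightforward to implement on top of the zero-shot prompt above. A minimal sketch, again assuming a generic `complete` function plus a caller-supplied `extract_answer` that parses the final answer out of a reasoning trace (both are illustrative placeholders, not a real library API):

```python
from collections import Counter
from typing import Callable

def self_consistency(
    question: str,
    complete: Callable[[str], str],
    extract_answer: Callable[[str], str],
    n_samples: int = 5,
) -> str:
    # Build the same zero-shot CoT prompt as before.
    prompt = f"{question}\n\nLet's think step by step."
    # Sample several independent reasoning traces; the underlying LLM
    # call should use a nonzero temperature so the traces differ.
    traces = [complete(prompt) for _ in range(n_samples)]
    # Reduce each trace to its final answer and majority-vote.
    answers = [extract_answer(trace) for trace in traces]
    return Counter(answers).most_common(1)[0][0]
```

Majority voting only helps if the sampled traces are diverse, which is why the sketch assumes a nonzero sampling temperature.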
CoT is particularly useful for:
- Mathematical problems
- Logical reasoning
- Multi-step planning
- Complex analysis tasks
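As a concrete illustration of the math case, here is what a few-shot CoT prompt might look like; the worked example inside the prompt is invented for illustration:

```python
FEW_SHOT_COT = """\
Q: A cafe sells 3 coffees at $4 each and 2 teas at $3 each. What is the total?
A: The coffees cost 3 * 4 = 12 dollars. The teas cost 2 * 3 = 6 dollars.
Together that is 12 + 6 = 18 dollars. The answer is 18.

Q: {question}
A:"""

def few_shot_cot_prompt(question: str) -> str:
    # The worked example demonstrates the step-by-step format the
    # model is expected to imitate for the new question.
    return FEW_SHOT_COT.format(question=question)
```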
In short, CoT works by forcing the model to make its implicit reasoning explicit, which reduces the chance that it silently skips a critical step.