# Context Poisoning
The degradation of AI model performance when irrelevant, misleading, contradictory, or adversarial information is included in the context window.
Also known as: Context Contamination, Context Pollution, Noisy Context
Category: AI
Tags: ai, context-engineering, risks, reliability, prompt-engineering
## Explanation
Context Poisoning occurs when the quality of an AI model's output degrades because of problematic content in its context window. Unlike prompt injection (which targets the model's instructions), context poisoning corrupts the informational context the model uses to reason, leading to subtly degraded rather than overtly hijacked outputs.
**Types of Context Poisoning**:
1. **Noise poisoning**: Including irrelevant information that dilutes the model's attention away from what matters
2. **Contradictory poisoning**: Providing conflicting information that confuses the model's reasoning
3. **Stale poisoning**: Including outdated information that the model treats as current
4. **Adversarial poisoning**: Deliberately inserting misleading content to manipulate outputs
5. **Volume poisoning**: Overwhelming the context with so much information that quality degrades (related to the lost-in-the-middle effect)
6. **Bias poisoning**: Including skewed or unrepresentative examples that shift the model's outputs
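Several of these failure modes can be screened mechanically before context assembly if each chunk carries simple metadata. The `Chunk` type, field names, and thresholds below are illustrative assumptions, not a standard API:

```python
from dataclasses import dataclass
import time

@dataclass
class Chunk:
    text: str
    relevance: float   # similarity score from a retriever, 0..1 (assumed)
    fetched_at: float  # unix timestamp of when the content was produced

def screen(chunks, min_relevance=0.5, max_age_s=86_400 * 30):
    """Drop chunks likely to cause noise or stale poisoning."""
    now = time.time()
    return [
        c for c in chunks
        if c.relevance >= min_relevance        # guards against noise poisoning
        and now - c.fetched_at <= max_age_s    # guards against stale poisoning
    ]
```

Contradictory and adversarial poisoning are harder to catch with metadata alone and typically need content-level checks (e.g., cross-source consistency tests).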
**How It Happens**:
- **RAG pipelines**: Retrieval systems return irrelevant or outdated documents
- **Conversation history**: Long conversations accumulate noise and contradictions
- **Multi-source context**: Combining information from sources of varying quality
- **User inputs**: Unvalidated user content pollutes the context
- **Tool outputs**: Failed tool calls or noisy tool results contaminate reasoning
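Tool outputs are a common entry point: a raw traceback or an oversized response dropped verbatim into context can masquerade as data. A minimal sketch of sanitizing tool results before they enter the context (the result-dict shape here is an assumption, not a particular framework's format):

```python
def tool_result_to_context(result, max_chars=2_000):
    """Convert a tool-call result into a safe context entry.

    Failed calls become a short, explicit note instead of a raw
    error dump, and oversized outputs are truncated to limit
    volume poisoning.
    """
    if result.get("error"):
        return f"[tool {result['name']} failed: {result['error'][:200]}]"
    text = str(result.get("output", ""))
    return text[:max_chars]
```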
**The Lost-in-the-Middle Effect**:
Research has shown that LLMs pay less attention to information in the middle of long contexts, favoring the beginning and end (Liu et al., 2023, "Lost in the Middle"). Important context placed in the middle may therefore be effectively lost, while irrelevant content near the edges may receive disproportionate attention.
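One way to exploit this effect rather than suffer from it is to order context items so the most important land at the edges and the least important in the middle. A minimal sketch (the importance ranking itself is assumed to come from elsewhere):

```python
def order_for_edges(items_by_importance):
    """Reorder items (most important first) so that importance
    decreases from the start and from the end toward the middle,
    where long-context models attend most weakly."""
    head, tail = [], []
    for i, item in enumerate(items_by_importance):
        (head if i % 2 == 0 else tail).append(item)
    return head + list(reversed(tail))
```

For example, with items ranked A > B > C > D > E, the result places A first, B last, and E (the least important) in the middle.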
**Impact on AI Systems**:
- Subtle accuracy degradation that's hard to detect
- Confident but incorrect responses
- Inconsistent output quality depending on context composition
- Compounding errors in multi-step agent workflows
- Reduced effectiveness of carefully crafted system prompts
**Mitigation**:
- Curate context carefully: only include relevant, high-quality information
- Place critical context at the beginning and end of the prompt
- Use context compression or summarization for long conversations
- Validate RAG retrieval results before including them
- Monitor output quality as context grows
- Implement context rotation strategies for long-running agents
- Test with adversarial and noisy contexts
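The compression and rotation points above can be sketched as a sliding window over conversation history that pins the system prompt, keeps recent turns verbatim, and marks the elided middle for a summarization step (not shown here). The message-dict shape is an assumption:

```python
def trim_history(messages, keep_recent=6):
    """Keep the system prompt and the most recent turns verbatim;
    collapse everything in between into a one-line marker that a
    summarizer could later replace with an actual summary."""
    if len(messages) <= keep_recent + 1:
        return messages
    system, rest = messages[0], messages[1:]
    dropped = rest[:-keep_recent]
    note = {"role": "system",
            "content": f"[{len(dropped)} earlier messages summarized/elided]"}
    return [system, note] + rest[-keep_recent:]
```

This bounds context growth in long-running agents while preserving the edges of the window, where the model attends most strongly.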