prompt-engineering - Concepts
Explore concepts tagged with "prompt-engineering"
Total concepts: 7
Concepts
- Prompt Fragility - The tendency for AI prompts to break or produce degraded outputs when small changes occur in input data, phrasing, or model versions.
- Context Poisoning - The degradation of AI model performance when irrelevant, misleading, contradictory, or adversarial information is included in the context window.
- Prompt Adherence - The degree to which a large language model follows the instructions, constraints, and formatting specified in a prompt.
- Prompt Debt - The accumulated cost of unrefined, ad-hoc, or poorly maintained prompts that degrade AI output quality and create hidden inefficiencies over time.
- Steerability - The ability to control and direct an AI model's behavior, tone, style, and output characteristics through instructions and configuration.
- AI Context Management - Strategies and techniques for effectively managing the limited context window of large language models to maximize relevance and response quality.
- Prompt Injection - A security vulnerability where malicious input causes an AI model to ignore its original instructions and follow attacker-supplied directives instead.
← Back to all concepts