llm-techniques - Concepts
Explore concepts tagged with "llm-techniques"
Total concepts: 18
Concepts
- Prompt Templates - Reusable, parameterized prompt structures that standardize how you ask AI to perform recurring tasks.
- Self-Ask Prompting - A prompting technique where the AI explicitly asks and answers its own sub-questions before producing a final answer.
- Prompt Compression - Shortening prompts while preserving their effectiveness, to reduce latency, cost, and context window usage.
- Prompt Chaining - Breaking complex tasks into a sequence of simpler prompts, where each prompt's output feeds into the next.
- System Prompts - Initial instructions given to an AI that define its behavior, personality, constraints, and capabilities for the entire conversation.
- ReAct Prompting - A prompting framework that combines reasoning traces with action-taking, enabling the model to interleave thinking and acting.
- Skeleton-of-Thought Prompting - A technique that prompts the model to first sketch a skeleton outline of an answer, then expand each point in parallel.
- Analogical Prompting - A technique that prompts AI to recall or generate relevant examples and analogies before solving a new problem.
- Role Prompting - A technique where you assign a specific persona, expertise, or character to an AI to shape its responses and behavior.
- Generated Knowledge Prompting - A two-step technique where the AI first generates relevant background knowledge, then uses that knowledge to answer the question.
- Tree-of-Thought Prompting - A prompting technique that explores multiple reasoning paths in parallel, like a tree of possibilities, to find the best solution.
- Self-Consistency Prompting - A decoding strategy that samples multiple reasoning paths and selects the most consistent answer through majority voting.
- Chain-of-Thought Prompting - A prompting technique that encourages LLMs to break down complex problems into step-by-step reasoning, improving accuracy and reliability.
- Directional Stimulus Prompting - Guiding an AI toward a desired output by injecting small hints, keywords, or cues into the prompt.
- Meta-Prompting - Using AI to generate, refine, or improve prompts themselves, creating a recursive improvement loop.
- Structured Output Prompting - Techniques for getting AI to produce output in specific, parseable formats like JSON, XML, or markdown tables.
- Least-to-Most Prompting - A technique that decomposes complex problems into simpler subproblems, solving them in order from easiest to hardest and using each answer to help solve the next.
- Reflexion - An AI technique where the model reflects on its own outputs, identifies errors, and iteratively improves its responses.
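To make the first two list entries concrete, here is a minimal sketch of prompt templates combined with prompt chaining. The template strings and the `call_llm` helper are hypothetical stand-ins; `call_llm` returns canned text so the chaining control flow can run without a real model API.

```python
# Reusable, parameterized prompt templates (hypothetical examples).
SUMMARIZE = "Summarize the following text in one sentence:\n{text}"
TRANSLATE = "Translate the following sentence into French:\n{text}"

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a model API here.
    if prompt.startswith("Summarize"):
        return "The report describes quarterly sales growth."
    return "Le rapport décrit la croissance trimestrielle des ventes."

def chain(text: str) -> str:
    # Step 1: fill a template and summarize.
    summary = call_llm(SUMMARIZE.format(text=text))
    # Step 2: feed step 1's output into the next prompt (chaining).
    return call_llm(TRANSLATE.format(text=summary))

result = chain("Q3 sales rose 12% across all regions.")
```

In a real pipeline, each chained step would also be a natural place to validate or trim the intermediate output before passing it along.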
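The self-consistency entry above can likewise be sketched as sample-then-vote. The `sample_answers` function is a hypothetical stand-in for drawing several temperature-sampled completions; the canned answers let the majority-vote logic run deterministically.

```python
from collections import Counter

def sample_answers(question: str, n: int = 5) -> list[str]:
    # Placeholder for n stochastic model samples; canned for illustration.
    return ["42", "42", "41", "42", "40"][:n]

def self_consistent_answer(question: str) -> str:
    # Tally the final answers across reasoning paths and return the
    # most common one (majority voting).
    votes = Counter(sample_answers(question))
    answer, _count = votes.most_common(1)[0]
    return answer

print(self_consistent_answer("What is 6 * 7?"))  # prints "42"
```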
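Finally, a minimal sketch of structured output prompting: instruct the model to reply in JSON only, then parse and validate the response. The prompt wording and `call_llm` stub are assumptions; the canned JSON reply stands in for a real completion.

```python
import json

PROMPT = (
    "Extract the person's name and age from the text. "
    'Reply with JSON only, e.g. {"name": "...", "age": 0}.\n'
    "Text: Alice is 30 years old."
)

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call returning a JSON string.
    return '{"name": "Alice", "age": 30}'

raw = call_llm(PROMPT)
record = json.loads(raw)          # parse the structured reply
assert set(record) == {"name", "age"}  # validate the expected keys
```

Production code would also handle the failure case where the reply is not valid JSON, e.g. by re-prompting.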