llm-techniques - Concepts
Explore concepts tagged with "llm-techniques"
Total concepts: 13
Concepts
- Analogical Prompting - A technique that prompts AI to recall or generate relevant examples and analogies before solving a new problem.
- Reflexion - An AI technique where the model reflects on its own outputs, identifies errors, and iteratively improves its responses.
- Role Prompting - A technique where you assign a specific persona, expertise, or character to an AI to shape its responses and behavior.
- Least-to-Most Prompting - A technique that decomposes complex problems into simpler subproblems, solving them in order from easiest to hardest.
- Generated Knowledge Prompting - A two-step technique where the AI first generates relevant background knowledge, then uses that knowledge to answer the question.
- Meta-Prompting - Using AI to generate, refine, or improve prompts themselves, creating a recursive improvement loop.
- Chain-of-Thought Prompting - A prompting technique that encourages LLMs to break down complex problems into step-by-step reasoning, improving accuracy and reliability.
- System Prompts - Initial instructions given to an AI that define its behavior, personality, constraints, and capabilities for the entire conversation.
- Tree-of-Thought Prompting - A prompting technique that explores and evaluates multiple branching reasoning paths, like a tree of possibilities, backtracking from unpromising branches to find the best solution.
- Self-Consistency Prompting - A decoding strategy that samples multiple reasoning paths and selects the most consistent answer through majority voting.
- Structured Output Prompting - Techniques for getting AI to produce output in specific, parseable formats like JSON, XML, or markdown tables.
- ReAct Prompting - A prompting framework that combines reasoning traces with action-taking, enabling the model to interleave thinking with actions such as tool calls.
- Prompt Chaining - Breaking complex tasks into a sequence of simpler prompts, where each prompt's output feeds into the next.
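Several of these techniques are mechanical enough to sketch in code. As one illustration, Self-Consistency Prompting reduces to sampling several reasoning paths and taking a majority vote over their final answers. The sketch below is a minimal, hedged example: `sample_reasoning_paths` is a hypothetical stand-in for drawing n temperature > 0 completions from an LLM, with canned (reasoning, answer) pairs in place of real model output.

```python
from collections import Counter

def sample_reasoning_paths(question: str, n: int = 5) -> list[tuple[str, str]]:
    # Hypothetical stub: in a real system this would draw n diverse
    # chain-of-thought completions from an LLM at temperature > 0.
    canned = [
        ("Half of 16 is 8; 8 + 1 = 9.", "9"),
        ("16 / 2 = 8, then add 1 to get 9.", "9"),
        ("16 - 2 = 14; 14 / 2 = 7.", "7"),      # a faulty reasoning path
        ("Halving 16 gives 8, plus 1 is 9.", "9"),
        ("16 / 2 = 8; 8 + 1 = 9.", "9"),
    ]
    return canned[:n]

def self_consistent_answer(question: str, n: int = 5) -> str:
    # Sample multiple reasoning paths, keep only the final answers,
    # and return the most common one (majority vote).
    paths = sample_reasoning_paths(question, n)
    answers = [answer for _reasoning, answer in paths]
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("What is half of 16, plus 1?"))  # -> 9
```

The voting step is what distinguishes this from plain chain-of-thought: a single faulty path (the "7" above) is outvoted by the consistent majority.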