ai - Concepts
Explore concepts tagged with "ai"
Total concepts: 37
Concepts
- AI Agent - AI systems that can take actions, use tools, and pursue goals autonomously.
- AI Alignment - Ensuring AI systems behave in accordance with human intentions and values.
- AI Assistants - AI tools configured to help with specific tasks like writing, research, or coding.
- AI Hallucination - When AI models generate plausible-sounding but incorrect or fabricated information.
- AI Master Prompt - A comprehensive system prompt that configures AI to understand your context and work style.
- AI Mega Prompts - A technique of concatenating multiple notes and documents into a single comprehensive file to provide rich context to LLMs.
- AI Safety - Research and practices ensuring AI systems are beneficial and don't cause unintended harm.
- AI Temperature - A parameter controlling the randomness and creativity of AI model outputs.
- Attention Mechanism - An AI technique that allows models to focus on relevant parts of input when producing output.
- Chain-of-Thought Prompting - A prompting technique that encourages LLMs to break down complex problems into step-by-step reasoning, improving accuracy and reliability.
- Chain of Thought - A prompting technique where AI models reason step-by-step rather than jumping to answers.
- Context Engineering - The practice of providing AI with optimal context for better outputs.
- Context Window - The maximum number of tokens an LLM can process in a single interaction, determining how much information it can consider when generating responses.
- Deep Learning - A subset of machine learning using neural networks with multiple layers to learn complex patterns from data.
- Embedding - Converting text, images, or other data into numerical vectors that capture semantic meaning.
- Few-Shot Learning - Training or prompting AI with just a few examples to perform new tasks.
- Fine-Tuning - Customizing pre-trained AI models by training them further on specific data or tasks.
- Generative AI - AI systems that create new content such as text, images, audio, or video.
- Goldilocks Rule for AI - The principle that AI tasks should be neither too easy nor too hard to maintain engagement and optimal learning.
- Human-in-the-Loop - Systems design where humans remain actively involved in AI decision-making processes.
- Jevons Paradox - The principle that increasing the efficiency of resource use tends to increase total consumption rather than decrease it.
- Large Language Models (LLMs) - AI models that use transformer architecture to understand and generate human-like text by predicting the next token in a sequence.
- Model Context Protocol - A standard for connecting AI models with external data sources and tools.
- Moravec's Paradox - The observation that tasks easy for humans (like perception and movement) are hard for AI, while tasks hard for humans (like math and chess) are easy for AI.
- Multimodal AI - AI systems that can process and generate multiple types of content like text, images, and audio.
- Prompt-Driven Development (PDD) - Using AI prompts as the primary interface for software development tasks.
- Prompt Engineering - The practice of crafting effective prompts to get optimal results from AI models.
- Prompt Lazy Loading - An AI design pattern that defers loading detailed prompt instructions until they are actually needed.
- Receptionist AI Design Pattern - An AI architecture pattern using a lightweight coordinator to route requests to specialized AI agents.
- Reinforcement Learning from Human Feedback (RLHF) - A training technique that aligns LLM outputs with human preferences by using human feedback to guide model behavior.
- Retrieval Augmented Generation (RAG) - An architecture that enhances LLM outputs by retrieving relevant information from external knowledge sources before generating responses.
- SCALE Method - A framework for leveraging AI through systematic capture, connection, and growth.
- Supervised Learning - A machine learning approach where models are trained on labeled data with known correct outputs.
- Tokenization - Breaking text into smaller units (tokens) that AI models can process.
- Transformer - The neural network architecture underlying modern AI language models.
- Unsupervised Learning - A machine learning approach where models find patterns in data without labeled examples or predefined outcomes.
- Zero-Shot Learning - AI performing tasks based on instructions alone, without any specific examples.
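Several of the entries above (Zero-Shot Learning, Few-Shot Learning, Chain of Thought) describe prompting techniques that differ only in how the prompt text is assembled before it is sent to an LLM. As a minimal sketch, with an illustrative sentiment-classification task and made-up example reviews:

```python
# Sketch of zero-shot vs. few-shot vs. chain-of-thought prompt assembly.
# The task wording and example reviews are illustrative placeholders,
# not from any particular model's documentation.

TASK = "Classify the sentiment of the review as positive or negative."

def zero_shot(review: str) -> str:
    # Zero-shot: instructions only, no examples.
    return f"{TASK}\n\nReview: {review}\nSentiment:"

def few_shot(review: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: a handful of labeled examples precede the query.
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return f"{TASK}\n\n{shots}\n\nReview: {review}\nSentiment:"

def chain_of_thought(review: str) -> str:
    # Chain of thought: ask the model to reason step by step first.
    return (f"{TASK} Think step by step before answering.\n\n"
            f"Review: {review}\nReasoning:")

examples = [("Loved it!", "positive"), ("Total waste of money.", "negative")]
print(zero_shot("The battery died in a day."))
print(few_shot("The battery died in a day.", examples))
print(chain_of_thought("The battery died in a day."))
```

The only moving part is the prompt string; the same model call is used in all three cases, which is why these are prompting techniques rather than training methods.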