training - Concepts
Explore concepts tagged with "training"
Total concepts: 20
Concepts
- Constitutional AI - AI training method using a set of principles (constitution) to guide model behavior and self-improvement.
- Multi-Task Learning - A machine learning approach where a single model is trained on multiple related tasks simultaneously, leveraging shared representations to improve generalization.
- Microteaching - A training technique where educators practice teaching short lessons to small groups, receiving immediate feedback to refine their skills.
- Fine-Tuning - Customizing pre-trained AI models by training them further on specific data or tasks.
- Direct Preference Optimization - A simplified alternative to RLHF that fine-tunes language models directly on human preference data without training a separate reward model.
- Reinforcement Learning from Human Feedback (RLHF) - A training technique that aligns LLM outputs with human preferences by using human feedback to guide model behavior.
- Instruction Tuning - A fine-tuning technique that trains language models to follow natural language instructions by learning from examples of instruction-response pairs.
- Open Training - The practice of making the entire AI model training process transparent and reproducible, including training data, code, hyperparameters, and methodology.
- Reward Model - A neural network trained to predict human preferences, used to provide a scalar reward signal for optimizing language model behavior in RLHF.
- Reinforcement Learning - A machine learning paradigm where an agent learns to make decisions by taking actions in an environment and receiving rewards or penalties as feedback.
- Attention Gym - Regular practices for building and maintaining attentional fitness and focus capacity.
- Pre-training - The initial phase of training a language model on large-scale text data to learn general language understanding before task-specific fine-tuning.
- Train the Trainer - A methodology for developing skilled trainers by teaching them both subject matter expertise and instructional delivery techniques.
- Training Data - The dataset used to teach a machine learning model patterns and relationships, directly shaping the model's capabilities and limitations.
- Model Collapse - The degradation of AI model quality when trained on synthetic data generated by other AI models, causing progressive loss of diversity and accuracy.
- Knowledge Distillation - A model compression technique where a smaller student model is trained to reproduce the behavior and outputs of a larger, more capable teacher model.
- Stress Inoculation - Controlled exposure to manageable stress to build tolerance and coping skills for future challenges.
- Attention Training - Practices designed to improve attentional capacity, control, and flexibility.
- Federated Learning - A distributed machine learning approach where models are trained across multiple decentralized devices or servers holding local data, without exchanging raw data.
- Learning Pyramid - A model illustrating that retention rates vary dramatically based on the learning method, with active methods producing far better results than passive ones.
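The Direct Preference Optimization entry notes that DPO fine-tunes directly on preference pairs without a separate reward model. The standard DPO objective, for a preferred response $y_w$ and a dispreferred response $y_l$ to prompt $x$, with policy $\pi_\theta$, frozen reference model $\pi_{\mathrm{ref}}$, and scale $\beta$:

$$
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

Minimizing this pushes the policy to assign relatively more probability to preferred responses than the reference model does, which is why no learned reward model is needed.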
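The Knowledge Distillation entry above describes training a student to reproduce a teacher's outputs. A minimal sketch of the standard soft-target objective, using temperature-scaled softmax and a KL-divergence loss (function names and the toy logits are illustrative, not from any particular library):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from teacher soft targets to student predictions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))
    return temperature ** 2 * kl

# A student that matches the teacher incurs zero loss;
# a diverging student incurs a positive penalty.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))       # 0.0
print(distillation_loss([0.1, 1.0, 2.0], [2.0, 1.0, 0.1]) > 0)   # True
```

In practice this soft-target term is usually combined with the ordinary cross-entropy loss on the true labels, weighted by a mixing coefficient.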