fundamentals - Concepts
Explore concepts tagged with "fundamentals"
Total concepts: 37
Concepts
- Symmetry Breaking - The process by which a system transitions from a symmetric state to one with less symmetry, giving rise to new structures, forces, and phenomena.
- AI Frontier Model - The most capable AI models at the current frontier of performance, typically developed by leading AI labs.
- Embedding - Converting text, images, or other data into numerical vectors that capture semantic meaning (a cosine-similarity sketch follows this list).
- Metis - The ancient Greek concept of cunning intelligence — the practical, adaptive wisdom needed to navigate ambiguity, seize opportunities, and act effectively in uncertain situations.
- Attention Mechanism - An AI technique that allows models to focus on relevant parts of the input when producing output (a scaled dot-product sketch follows this list).
- Symmetry in Physics - The property that the laws of physics remain unchanged under specific transformations such as translations in space or time, rotations, or reflections.
- Input Randomness - The variability and unpredictability in the inputs provided to an AI system, including prompt phrasing, context composition, and information ordering, which directly influences the quality and consistency of outputs.
- AI Ethics - The field concerned with the moral principles, values, and guidelines that should govern the development and use of artificial intelligence systems.
- Model Parameters - The learned numerical values (weights and biases) within a neural network that determine how the model transforms inputs into outputs.
- Ensemble Learning - A machine learning paradigm that combines predictions from multiple models to produce more accurate and robust results than any single model alone.
- Information Management - The systematic organization, storage, and retrieval of information.
- Autoencoder - A neural network architecture that learns compressed representations by encoding input into a lower-dimensional latent space and then decoding it back to reconstruct the original input.
- Transformer - The neural network architecture underlying modern AI language models.
- Explainable AI - A set of methods and techniques that make AI system outputs understandable and interpretable to humans.
- Conservation Laws - Fundamental physical principles stating that certain measurable quantities in an isolated system remain constant over time, regardless of the processes occurring within it.
- Dimensionality Reduction - A set of techniques for reducing the number of variables in a dataset while preserving its essential structure, making high-dimensional data easier to visualize, process, and analyze.
- Tokenization - Breaking text into smaller units (tokens) that AI models can process (a toy sketch follows this list).
- Chronos - The ancient Greek concept of sequential, quantitative time — measurable duration as opposed to the qualitative, opportune moment represented by kairos.
- Token - A fundamental unit of text that language models process, typically representing a word, subword, or character.
- Generative Adversarial Network - A machine learning framework where two neural networks compete against each other — a generator creating synthetic data and a discriminator evaluating its authenticity — to produce increasingly realistic outputs.
- AI Foundation Models - Large-scale AI models trained on broad data that serve as the base for various downstream applications.
- Law of Pragnanz - The overarching Gestalt principle stating that the brain tends to perceive and organize visual information in the simplest, most regular, and most orderly form possible.
- Reinforcement Learning - A machine learning paradigm where an agent learns to make decisions by taking actions in an environment and receiving rewards or penalties as feedback.
- Variational Autoencoder - A generative model that learns a structured, continuous latent space by combining autoencoder architecture with probabilistic inference, enabling generation of new data by sampling from the learned distribution.
- AI Scaling Laws - Empirical relationships between model size, training data, compute, and AI performance that guide resource allocation.
- Pre-training - The initial phase of training a language model on large-scale text data to learn general language understanding before task-specific fine-tuning.
- Next-Token Prediction - The core mechanism of autoregressive language models: text is generated by repeatedly predicting a probability distribution over the next token given all preceding tokens, then sampling or selecting from it (a temperature-sampling sketch follows this list).
- Training Data - The dataset used to teach a machine learning model patterns and relationships, directly shaping the model's capabilities and limitations.
- AI Tokenization - The process of breaking text into tokens that AI models use as their fundamental units of input and output.
- Inference - The process of drawing conclusions from available evidence, premises, or observations using logical reasoning.
- Elementary Reading - The first and most basic level of reading, focused on literacy itself: recognizing words, understanding sentences, and grasping basic meaning.
- Output Randomness - The intentional and unintentional variability in AI-generated outputs arising from sampling parameters, model stochasticity, and the probabilistic nature of next-token prediction.
- Representation Learning - A class of machine learning techniques where models automatically discover the representations needed for a task from raw data, rather than relying on manually engineered features.
- Aion - The ancient Greek concept of eternal, cyclical, or unbounded time — encompassing ages, eras, and the totality of time beyond human measurement.
- Backpropagation - The fundamental algorithm for training neural networks that efficiently computes gradients of the loss function with respect to each weight by propagating errors backward through the network layers (a single-neuron sketch follows this list).
- Telos - The ancient Greek concept of purpose, ultimate aim, or inherent end toward which something naturally develops or is directed.
- Latent Space - A compressed, multi-dimensional representation space where a model encodes the essential features of its input data.
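As a rough illustration of tokenization, here is a minimal greedy longest-match sketch over a made-up toy vocabulary. Real tokenizers use learned algorithms such as byte-pair encoding; nothing here reflects any particular model's vocabulary.

```python
# Toy greedy longest-match subword tokenizer: a minimal sketch of the idea,
# not any real tokenizer (production systems use BPE or similar algorithms).
def tokenize(text: str, vocab: set[str]) -> list[str]:
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest vocabulary entry that matches at position i.
        for end in range(len(text), i, -1):
            piece = text[i:end]
            if piece in vocab:
                tokens.append(piece)
                i = end
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

vocab = {"token", "iza", "tion", "un", "break", "able"}  # invented toy vocabulary
print(tokenize("tokenization", vocab))  # ['token', 'iza', 'tion']
```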
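For embeddings, the standard way to compare semantic closeness is cosine similarity between vectors. The sketch below uses invented three-dimensional toy values purely for illustration; real embeddings have hundreds or thousands of dimensions and come from a trained model.

```python
import math

# Cosine similarity between embedding vectors: a common measure of the
# semantic closeness that embeddings are meant to capture.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up toy vectors, not output of any real embedding model.
cat = [0.9, 0.1, 0.3]
kitten = [0.85, 0.15, 0.35]
car = [0.1, 0.9, 0.2]
print(cosine_similarity(cat, kitten))  # ~0.996: semantically close
print(cosine_similarity(cat, car))     # ~0.271: semantically distant
```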
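The attention mechanism at the heart of the transformer can be illustrated with scaled dot-product attention: softmax(QK^T / sqrt(d)) V. A minimal NumPy sketch with arbitrary toy shapes:

```python
import numpy as np

# Scaled dot-product attention, the core operation inside transformer
# attention: each query is compared to every key, the scores are turned
# into weights via softmax, and the output is a weighted mix of values.
def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 query positions, dimension 4 (toy shapes)
K = rng.normal(size=(5, 4))  # 5 key positions
V = rng.normal(size=(5, 4))  # 5 value vectors
print(attention(Q, K, V).shape)  # (3, 4): one output per query
```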
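Next-token prediction, and the output randomness that comes with it, can be sketched as temperature sampling over next-token logits. The logits and candidate tokens below are made-up toy values, not from any real model.

```python
import math
import random

# Temperature sampling over next-token logits: one sketch of how output
# randomness arises from the probabilistic nature of next-token prediction.
def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    scaled = {tok: l / temperature for tok, l in logits.items()}
    max_l = max(scaled.values())
    exp = {tok: math.exp(l - max_l) for tok, l in scaled.items()}  # stable softmax
    total = sum(exp.values())
    probs = {tok: e / total for tok, e in exp.items()}
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return tok
    return tok  # guard against floating-point rounding

# Hypothetical scores for continuing "The cat sat on the ..."
logits = {"mat": 2.0, "sofa": 1.5, "moon": 0.1}
print(sample_next_token(logits, temperature=0.7))
```

Lower temperatures sharpen the distribution toward the most likely token; higher temperatures flatten it and increase variability.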
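Backpropagation is the chain rule applied systematically from the loss back to each weight. A single-neuron example with a squared loss makes the mechanics concrete; this is a minimal sketch under toy assumptions, not a general training framework.

```python
import math

# Backpropagation on one neuron: y = sigmoid(w*x + b), loss = (y - target)^2.
def train_step(w, b, x, target, lr=0.1):
    # Forward pass
    z = w * x + b
    y = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
    loss = (y - target) ** 2
    # Backward pass: chain rule from the loss back to the weights
    dloss_dy = 2.0 * (y - target)
    dy_dz = y * (1.0 - y)           # derivative of sigmoid
    grad_w = dloss_dy * dy_dz * x   # dz/dw = x
    grad_b = dloss_dy * dy_dz       # dz/db = 1
    # Gradient descent update
    return w - lr * grad_w, b - lr * grad_b, loss

w, b = 0.5, 0.0  # arbitrary toy initialization
for step in range(200):
    w, b, loss = train_step(w, b, x=1.0, target=0.9)
print(round(loss, 4))  # loss shrinks toward 0 as w and b are updated
```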