Context Window

The maximum number of tokens an LLM can process in a single interaction, determining how much information it can consider when generating responses.

Related Concepts:
- Large Language Models (LLMs)
- Retrieval Augmented Generation (RAG)
- AI Mega Prompts
- AI Attention Budget
- Context Poisoning
- Attention Mechanism
- Token
- Tokenization
- Autoregressive Model
- AI Context Management
- AI Agent Memory
- Conversational Memory
- AI Memory Silo Problem
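As a rough illustration of why the context window matters, here is a minimal sketch of trimming conversation history to fit a fixed token budget. The token counting uses simple whitespace splitting as a stand-in for a real model-specific tokenizer, and the budget value is illustrative, not tied to any particular model.

```python
# Minimal sketch of fitting conversation history into a context window.
# Whitespace splitting stands in for a real tokenizer (e.g. a BPE
# tokenizer); the budget is an illustrative number, not a real limit.

def count_tokens(text: str) -> int:
    """Approximate token count via whitespace splitting."""
    return len(text.split())

def fit_to_context(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages whose combined token count fits the budget."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):       # walk newest-to-oldest
        cost = count_tokens(msg)
        if used + cost > budget:
            break                        # older messages are dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = [
    "User: What is a context window?",
    "Assistant: It is the maximum number of tokens the model can process at once.",
    "User: What happens when it overflows?",
]
print(fit_to_context(history, budget=25))
```

Dropping the oldest messages first is only the simplest policy; related concepts such as Retrieval Augmented Generation (RAG) and AI Context Management cover ways to keep relevant information available without exceeding the window.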