risks - Concepts
Explore concepts tagged with "risks"
Total concepts: 36
Concepts
- Past Performance Fallacy - The principle that historical results and past successes do not guarantee or reliably predict future outcomes.
- Prompt Fragility - The tendency for AI prompts to break or produce degraded outputs when small changes occur in input data, phrasing, or model versions.
- Epistemic Uncertainty - The uncertainty arising from lack of knowledge or information, rather than from inherent randomness or variability in the world.
- Context Poisoning - The degradation of AI model performance when irrelevant, misleading, contradictory, or adversarial information is included in the context window.
- Time-Saving Bias - The tendency to misestimate the time saved when increasing speed, typically underestimating savings when speeding up from low speeds and overestimating them when speeding up from high speeds.
- Tribal Knowledge - Undocumented information known only to specific individuals or groups within an organization.
- AI Instruction Drift - The gradual deviation of AI behavior from original instructions over extended interactions, caused by accumulating contradictory rules or evolving user intent without matching instruction updates.
- Variance - A measure of the spread of values, calculated as the average squared deviation from the mean.
- Knowledge Decay - Gradual loss of relevance or accuracy of stored knowledge over time as conditions change.
- Knowledge Drain - Gradual loss of institutional knowledge when experienced employees leave an organization without transferring their expertise.
- Ludic Fallacy - The error of applying neat, well-defined models from games and controlled environments to the messy, unpredictable complexity of the real world.
- Dual-Use Dilemma - The ethical challenge that arises when technology, knowledge, or research can be used for both beneficial and harmful purposes.
- Unknown Unknowns - The category of things we don't know we don't know, representing the most challenging type of uncertainty in decision-making.
- AI Sycophancy - Tendency of AI models to agree with users and tell them what they want to hear rather than providing accurate information.
- Data Breach - A security incident where protected or confidential data is accessed by unauthorized parties.
- Fat Tails - Probability distributions where extreme events occur more frequently than normal distributions predict.
- Neglect of Probability - The tendency to disregard probability when making decisions under uncertainty, focusing instead on the magnitude of outcomes regardless of their likelihood.
- Single-Action Bias - The tendency to take one action in response to a risk or problem and feel satisfied that the issue has been addressed, even when multiple actions are needed.
- Platform Dependence - The growing reliance on centralized platforms for essential digital activities, creating vulnerability to their policies and decisions.
- AI Safety - Research and practices ensuring AI systems are beneficial and don't cause unintended harm.
- AI Lethal Trifecta - Dangerous combination of AI sycophancy, hallucination, and instruction drift that compounds agent failure modes.
- Reward Hacking - A failure mode in reinforcement learning where an agent exploits flaws in the reward function to achieve high reward without fulfilling the intended objective.
- Turkey Problem - The illusion of safety built from past experience, illustrated by a turkey fed daily for 1,000 days that sees no danger until Thanksgiving.
- Jailbreaking AI - Techniques used to bypass an AI model's safety guardrails and restrictions to produce outputs it was designed to refuse.
- AI Anthropomorphism - The attribution of human characteristics, emotions, and intentions to artificial intelligence systems.
- Shadow AI - Unauthorized or unmonitored use of AI tools by employees outside IT governance, the AI equivalent of Shadow IT but faster-moving and harder to detect.
- AI Hallucination - When AI models generate plausible-sounding but incorrect or fabricated information.
- Model Collapse - The degradation of AI model quality when trained on synthetic data generated by other AI models, causing progressive loss of diversity and accuracy.
- Black Swan - A rare, unpredictable event with major impact that is rationalized in hindsight.
- Law of Large Numbers - The principle that averages of random samples converge to expected values as sample size increases.
- Ergodicity - The property that time averages for a single entity equal ensemble averages across many entities; a crucial distinction for risk and decision-making.
- Fear of Failure - The emotional response that prevents risk-taking due to concern about negative outcomes.
- AI Psychosis - Psychosis-like symptoms triggered or intensified by prolonged engagement with conversational AI chatbots.
- Moral Hazard - The tendency for people to take greater risks when they are insulated from the consequences, often because someone else bears the cost.
- Prompt Injection - A security vulnerability where malicious input causes an AI model to ignore its original instructions and follow attacker-supplied directives instead.
- Information Diseases - Pathologies that affect information systems, causing data loss, inaccessibility, or degradation.
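Two of the statistical concepts above, Variance and the Law of Large Numbers, can be illustrated with a minimal sketch. The function below computes population variance exactly as the definition states (average squared deviation from the mean); the sample values and dice simulation are illustrative only.

```python
import random

def variance(xs):
    # Average squared deviation from the mean (population variance).
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(variance(data))  # 4.0

# Law of Large Numbers: the average of fair die rolls drifts
# toward the expected value 3.5 as the sample grows.
random.seed(0)
for n in (10, 1_000, 100_000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(n, sum(rolls) / n)
```

The convergence of the running average is also why the Turkey Problem and Fat Tails above matter: the guarantee holds only under stable, finite-variance conditions, which real-world risks often violate.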