governance - Concepts
Explore concepts tagged with "governance"
Total concepts: 46
Concepts
- AI Usage Policy - Organizational rules governing how employees can use AI tools, what data can be shared with AI systems, which tools are approved, and what use cases are prohibited.
- Enterprise AI Deployment - The practical discipline of rolling out AI tools, agents, and context management across an organization, addressing infrastructure, access control, compliance, training, and change management.
- AI Agent Permissions - Controls governing what actions, tools, files, and resources AI agents can access, enforcing the principle of least privilege in agentic AI systems.
- Risk Management - The systematic process of identifying, assessing, prioritizing, and mitigating risks to minimize their negative impact.
- Four Eyes Principle - A control mechanism requiring two people to approve critical actions, preventing unilateral decisions.
- AI Context Governance - Policies and practices for managing who can create, modify, and distribute AI context.
- Enterprise Context Management - Organization-wide governance and coordination of AI context across departments and teams.
- Residual Risk - The level of risk that remains after risk mitigation controls and treatments have been applied.
- Quad Pattern - A documentation pattern using four complementary document types: rules, processes, requirements, and references.
- AI Fairness - The study and practice of ensuring AI systems produce equitable outcomes and do not discriminate against individuals or groups based on protected characteristics.
- Risk Register - A structured document that records identified risks along with their analysis, treatment plans, and current status.
- Context Provenance - Tracking the origin, authorship, and modification history of context information.
- Risk Culture - The shared values, beliefs, attitudes, and behaviors within an organization that shape how risk is identified, assessed, and managed.
- Risk Appetite - The level and type of risk an organization or individual is willing to accept in pursuit of their objectives.
- AI Governance - The frameworks, policies, and oversight mechanisms that guide the responsible development, deployment, and regulation of artificial intelligence systems.
- AI Guardrails - Safety constraints and boundaries built into AI systems to prevent harmful or undesired outputs.
- Decentralization - Distributing control, data, or operations across multiple independent nodes rather than concentrating them in a single central authority.
- Dual-Use Dilemma - The ethical challenge that arises when technology, knowledge, or research can be used for both beneficial and harmful purposes.
- AI Transparency - The principle that AI systems should operate in ways that are open, understandable, and inspectable, allowing stakeholders to understand how decisions are made.
- Project Charter - A foundational document that formally authorizes a project and defines its scope.
- Information Lifecycle Management - A comprehensive approach to managing data through all stages from creation to disposal based on its value and requirements.
- Subsidiarity - The principle that decisions should be made at the lowest competent organizational level, closest to those affected.
- Separation of Duties - A security principle requiring multiple people to complete critical tasks, preventing fraud and errors by any one individual.
- Digital Rights - The human rights and freedoms that apply to people's use of digital technologies, including privacy, expression, and access.
- AI Accountability - The principle that individuals, organizations, and institutions must be answerable for the development, deployment, and outcomes of AI systems.
- AI Privacy - The set of concerns around what happens to personal and sensitive data when using AI platforms, encompassing data collection, retention, training use, and third-party access.
- Key Risk Indicators - Quantitative metrics used to monitor and provide early warning signals about changes in risk exposure.
- AI Safety - Research and practices ensuring AI systems are beneficial and do not cause unintended harm.
- Inherent Risk - The level of risk present in an activity or process before any controls or mitigation measures are applied.
- BDFL (Benevolent Dictator For Life) - A title for open source project leaders who retain final decision-making authority.
- Shadow AI - Unauthorized or unmonitored use of AI tools by employees outside IT governance, the AI equivalent of Shadow IT but faster-moving and harder to detect.
- Operational Resilience - An organization's ability to prevent, adapt to, respond to, and recover from disruptions to continue delivering critical services.
- Risk Tolerance - The acceptable level of variation in outcomes that an organization or individual is willing to withstand.
- Decision-Making Power - The authority and ability to make choices that affect outcomes within organizations and systems.
- AI Data Security - Protecting sensitive data when using AI systems, where every interaction, including prompts, uploaded files, tool-call results, and agent memory, is a potential data exposure point.
- AI Watermarking - Techniques for embedding detectable signals in AI-generated content to enable identification of its synthetic origin.
- Accountability Principle - The requirement that organizations must not only comply with data protection rules but also demonstrate that compliance through documentation and evidence.
- Benevolent Dictator - A governance model where a single leader retains final authority but exercises it for the collective benefit.
- Responsible AI - A comprehensive framework for developing and deploying AI systems that are ethical, transparent, fair, accountable, safe, and beneficial to society.
- Data Retention Policy - A set of rules defining how long different types of data should be kept and when they should be deleted.
- AI Cost Management - Strategies for monitoring, optimizing, and controlling the financial costs of running AI systems in production.
- Data Ownership - The concept of having property-like rights over data you create or that pertains to you.
- Enterprise Risk Management - A holistic approach to managing all types of risk across an organization in an integrated and strategic manner.
- AI Explainability - Methods and techniques for making AI decision-making processes understandable and interpretable by humans.
- Security Audit - A systematic evaluation of an organization's security posture against established standards and policies.
- AI Oversight - The governance mechanisms, processes, and institutions designed to monitor, evaluate, and regulate AI systems throughout their lifecycle.
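Several of the risk concepts above (inherent risk, residual risk, risk appetite, risk register) fit together in one simple model: each risk has a score before controls, controls reduce that score, and anything still above the organization's appetite needs further treatment. The Python sketch below illustrates that relationship; the class names, the 1-5 likelihood/impact scale, and the fixed per-control reduction are all invented for illustration, not a standard scoring scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    likelihood: int          # illustrative scale: 1 (rare) .. 5 (almost certain)
    impact: int              # illustrative scale: 1 (negligible) .. 5 (severe)
    mitigations: list = field(default_factory=list)  # controls applied so far

    @property
    def inherent_score(self) -> int:
        # Inherent risk: the level before any controls are applied.
        return self.likelihood * self.impact

    def residual_score(self, reduction_per_control: int = 3) -> int:
        # Residual risk: a crude model where each control shaves a fixed
        # amount off the inherent score, floored at 1 because controls
        # rarely eliminate a risk entirely.
        return max(1, self.inherent_score - reduction_per_control * len(self.mitigations))

@dataclass
class RiskRegister:
    appetite: int            # maximum residual score the organization accepts
    risks: list = field(default_factory=list)

    def exceeding_appetite(self) -> list:
        # Risks whose residual score still exceeds the appetite are
        # candidates for additional treatment.
        return [r for r in self.risks if r.residual_score() > self.appetite]

register = RiskRegister(appetite=6)
register.risks.append(Risk("Shadow AI data leak", likelihood=4, impact=5,
                           mitigations=["AI usage policy", "DLP monitoring"]))
register.risks.append(Risk("Unreviewed agent action", likelihood=3, impact=4,
                           mitigations=["Four-eyes approval"]))

for risk in register.exceeding_appetite():
    print(risk.name, risk.inherent_score, risk.residual_score())
```

In this toy model "Shadow AI data leak" has an inherent score of 20 and a residual score of 14, so it stays above the appetite of 6 even with two controls in place, which is exactly the signal a register is meant to surface.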
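The AI Agent Permissions entry describes enforcing least privilege on agent actions; one common way to realize that is a deny-by-default allowlist checked before every tool call. The sketch below is a minimal, hypothetical version of such a check; the `PermissionGrant` type, tool names, and path-prefix rule are assumptions for illustration, not any particular framework's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PermissionGrant:
    allowed_tools: frozenset          # tools the agent may invoke
    allowed_path_prefixes: tuple      # filesystem scope the agent may touch

    def permits(self, tool: str, path: str) -> bool:
        # Deny by default: an action is allowed only if both the tool and
        # the target path are explicitly covered by the grant.
        return tool in self.allowed_tools and any(
            path.startswith(prefix) for prefix in self.allowed_path_prefixes)

grant = PermissionGrant(
    allowed_tools=frozenset({"read_file", "run_tests"}),
    allowed_path_prefixes=("/workspace/project/",))

print(grant.permits("read_file", "/workspace/project/src/main.py"))    # True
print(grant.permits("delete_file", "/workspace/project/src/main.py"))  # False: tool not granted
print(grant.permits("read_file", "/etc/passwd"))                       # False: path out of scope
```

The deny-by-default shape matters: forgetting to list a tool fails closed rather than open, which is the practical meaning of least privilege for agentic systems.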