Operational AI Governance, in practice
A library of core modules that recombine into audience-specific tracks. A translation toggle moves each module between ethical theory and legal contract language, so every module ends in something you can actually put into a vendor contract.
Tracks
For corporate ESG officers integrating AI risk into existing governance.
For auditors who need to defend their findings against AI-augmented pushback.
How to use AI in social research without compromising methodological integrity or community trust.
Core Modules
Ten atoms. Each module appears in multiple tracks, tagged so it surfaces wherever it is contextually useful.
Stop-work authority. How junior staff get scapegoated, and how to architect around it.
Reading metrics critically. Where the failing 6% actually lands.
Red-teaming reports from the perspective of the populations being assessed.
AI-use guidelines for associates and contractors. What to document, how, and why.
Six contractual mechanisms. Procurement-ready clauses.
Ten questions to ask before signing off on an automated system.
Translating between ethical theory and legal contract language without losing either.
Why safety claims that work in the lab fail in muddy-water field conditions.
Designing systems that can fail safely. Negative power only.
Re-architecting grievance flows to refuse unsafe continuation.