Detection Arc Materials

Integrated legacy training document from the source archive.

Study Guide: AI Detection, Governance, and Professional Identity

This study guide explores the "Detection Arc," a series of analyses concerning the integration of Artificial Intelligence in the workplace, specifically within the social impact and research sectors. It examines the shift from viewing undisclosed AI use as a simple disciplinary matter to understanding it as a failure of institutional architecture and professional identity.

Part 1: Short-Answer Quiz

Instructions: Answer the following questions in two to three sentences based on the provided text.

  1. What is a "speed inversion" in the context of AI detection?
  2. How does the "live-edit test" distinguish between possession and proximity?
  3. Define the "defense tax" as it relates to workplace deliverables.
  4. What is the difference between "curiosity-driven" and "substitution-driven" tool use?
  5. Explain the concept of "generous suspicion" in management.
  6. Why is individual conscience considered the least reliable component of a governance system?
  7. What are the "four friction points" suggested for building conscience into an interface?
  8. Distinguish between "compliance" and "inhabitation" regarding institutional architecture.
  9. What does it mean for a junior researcher to become a "liability sponge"?
  10. What is the purpose of the "Calvin Convention" in procurement and M&E?

Part 2: Answer Key

  1. Speed inversion refers to a mismatch where a worker completes complex, machine-legible tasks with suspicious ease while struggling disproportionately with simple, human-facing tasks. This "inverted competence signature" suggests that an AI system is performing the heavy lifting while the human struggles with the parts the system cannot reach.
  2. The live-edit test requires an individual to modify or explain the logic of an artifact in real time to see if they have a workable mental model of the work (possession). If the person must stall or reverse-engineer the logic because it never settled in them, they only have proximity to the work rather than ownership.
  3. The defense tax is the cognitive burden and exhaustion resulting from an individual trying to protect their claim of ownership over work they do not actually understand. It manifests as delay, vagueness, and an over-managed composure as the person attempts to appear attached to the logic of the artifact.
  4. Curiosity-driven use involves using AI as a "sparring partner" to stress-test ideas or explore drafts while remaining open about the process and limitations. Substitution-driven use is concealed and performance-oriented, aiming to provide a finished surface that replaces human thinking and removes the worker's presence from the task.
  5. Generous suspicion is a management stance that takes rhythmic mismatches seriously enough to inquire about them while remaining open to non-punitive explanations like overload or hidden constraints. It seeks to start a "calibration conversation" that surfaces methods and tool use rather than moving immediately to an indictment.
  6. Individual conscience is unreliable because it is easily eroded by fatigue, deadline pressure, and misaligned incentives, epitomized by the late-night "10:47 PM" scenario. High-pressure environments make the wrong choice frictionless, meaning governance cannot rely on memory or character alone but must be built into the workflow itself.
  7. The four friction points are the disclosure checkpoint (declaring AI use at submission), the context gate (verifying data ownership before pasting), the attribution layer (metadata tracking provenance), and the uncomfortable pause (a mandatory 30-second wait to reflect on what is being signed). A code sketch of these four interventions appears just after this answer key.
  8. Compliance is a surface behavior where an individual follows rules or ticks boxes without reflecting on the underlying principle, often to "pass through" the turnstile of governance. Inhabitation occurs when a person is actually present for the mechanism, treating friction as a genuine moment of inventory and professional accounting.
  9. A liability sponge refers to a junior team member who uses AI to synthesize field notes or data and then signs their name to the output without full verification. When the AI-generated summaries are flawed or omit critical nuances, the junior staff member absorbs the legal and professional liability for a result they did not actually produce.
  10. The Calvin Convention consists of non-negotiable procurement clauses that guarantee an organization's right to audit a vendor's algorithm and interrogate their models. It ensures that evaluators are not merely "taking dictation" from proprietary systems but have the legal authority to demand evidence access and halt processing if needed.
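
The four friction points in answer 7 are described in the text as workflow interventions, not as a particular implementation. As a minimal sketch only, the Python below shows one way they might be wired into a submission pipeline; the Submission class and every function, field, and default value here are hypothetical, invented for illustration.

```python
# Hypothetical sketch: the four friction points as gates in a submission workflow.
import time
from dataclasses import dataclass, field


@dataclass
class Submission:
    author: str
    body: str
    ai_disclosure: str | None = None                # set at the disclosure checkpoint
    provenance: list = field(default_factory=list)  # filled by the attribution layer


def disclosure_checkpoint(sub: Submission) -> None:
    # Block submission until AI use (or its absence) is explicitly declared.
    if sub.ai_disclosure is None:
        raise ValueError("Declare if and how AI assistance was used.")


def context_gate(owns_data: bool, confidentiality_cleared: bool) -> None:
    # Verify data ownership and confidentiality before pasting into an external tool.
    if not (owns_data and confidentiality_cleared):
        raise PermissionError("Data ownership or confidentiality not confirmed.")


def attribution_layer(sub: Submission, span: str, origin: str) -> None:
    # Record provenance: which parts are human-drafted, AI-generated, or collaborative.
    if origin not in {"human", "ai", "collaborative"}:
        raise ValueError(f"Unknown origin: {origin!r}")
    sub.provenance.append({"span": span, "origin": origin})


def uncomfortable_pause(seconds: int = 30) -> None:
    # The mandatory wait: a moment for the human to notice if something is "off".
    print(f"Pausing {seconds} seconds. Re-read what you are about to sign.")
    time.sleep(seconds)


def submit(sub: Submission) -> str:
    disclosure_checkpoint(sub)   # friction point 1
    uncomfortable_pause()        # friction point 4
    return f"Submitted by {sub.author} with {len(sub.provenance)} provenance records."


sub = Submission(
    author="J. Doe",
    body="Quarterly M&E summary ...",
    ai_disclosure="Outline drafted with an LLM; all figures verified by hand.",
)
context_gate(owns_data=True, confidentiality_cleared=True)                # friction point 2
attribution_layer(sub, span="executive summary", origin="collaborative")  # friction point 3
print(submit(sub))
```

The design point the sketch tries to preserve is that the friction is unavoidable: submit() refuses to run without a disclosure, and the pause cannot be skipped, mirroring the text's claim that governance must live in the workflow rather than in memory or character.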

Part 3: Essay Questions

Instructions: Use the themes from the Detection Arc to prepare responses for the following prompts.

  1. The Architecture of Silence: Analyze how a workplace culture that rewards "immaculate surfaces" and speed over "inhabitable method" inadvertently encourages undisclosed AI substitution.
  2. The Governance of Relationship: Compare and contrast an AI policy that governs the "instrument" (the tool) versus one that governs the "relationship" (the worker's interaction with the tool).
  3. The Residual Obligation: Discuss the ethical shift that occurs when an institution provides robust governance architecture (friction, disclosure fields, and pauses). What is the individual’s responsibility once the system has "done its work"?
  4. Rhythm as Evidence: Explore the concept of "metabolism" in human effort. Why is tempo a more reliable indicator of authentic work in an AI-saturated environment than the final polished deliverable?
  5. The Social Impact Risk: Evaluate the dangers of using commercial AI for qualitative community research (M&E). Focus on the "erasure of nuance" and the "sanitization of grievances" as outlined in the Watchdog Curriculum.

Part 4: Glossary of Key Terms

  • Attribution Layer: A technical intervention involving metadata that tracks which parts of a document are human-drafted, AI-generated, or collaborative.
  • Calvin Convention: A set of procurement rules for M&E and social research that guarantees the right to audit algorithms, demand evidence access, and interrogate model logic.
  • Context Gate: A workflow intervention that asks a user about data ownership and confidentiality agreements before they are permitted to paste data into an external AI tool.
  • Curiosity Interview: A hiring or checkpoint technique that asks candidates how they have used AI, specifically focusing on where the tool failed them or where they had to override it.
  • Defense Tax: The cognitive cost of attempting to defend and maintain a claim of authorship over an artifact that the individual does not fully comprehend.
  • Disclosure Checkpoint: A mandatory field in a submission workflow where a worker must declare if and how AI assistance was used for a specific deliverable.
  • Five-Minute Rule: A heuristic suggesting that if a person cannot locate or explain the logic of their work within five minutes of live exploration, there is a failure of "possession."
  • Generous Suspicion: An observational discipline that treats workflow anomalies as an invitation to a calibration conversation rather than a verdict of misconduct.
  • Inhabitation: The act of being professionally present and reflective during governance rituals (like pauses or checkpoints) rather than just performing surface compliance.
  • Liability Sponge: A junior team member who absorbs the blame for flawed AI outputs after signing off on summaries or data they did not verify.
  • Live-Edit Test: A diagnostic exercise where an author is asked to make real-time modifications to a finished artifact to demonstrate their understanding of its internal logic.
  • Missing Signal: The absence of a mechanism in a workflow that asks "should you be doing this?" or flags privacy risks at the moment of decision.
  • Presence Failure: A breach of trust where an individual bypasses all institutional safeguards (friction, pauses, disclosure) without engaging their conscience or professional identity.
  • Residual Obligation: The professional agreement to treat institutional architecture as meaningful and to be honest within the slots the interface provides for transparency.
  • Speed Inversion: A detection signal where easy tasks take disproportionately long while difficult, machine-ready tasks are completed with uncanny speed (a toy illustration follows this glossary).
  • Stop Work Authority: The defined threshold or power within a research team to halt data processing if an AI model’s abstractions become too detached from community realities.
  • Substitution: The use of AI to replace human presence and thinking while preserving the appearance of solitary human authorship.
  • Uncomfortable Pause: A mandatory wait period before final submission designed to give the human "substrate" a moment to notice if something about the work is "off."
  • Victim Register: A framework used to stress-test research reports by identifying who pays the price when community grievances are sanitized or diluted by AI-generated summaries.
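
The speed-inversion entry names an observational signal, not an algorithm, so any code can only be a toy. The sketch below, assuming hypothetical per-task effort records and an invented ratio threshold, flags the inverted signature only when both halves of the mismatch appear together; consistent with "generous suspicion," such a flag should open a calibration conversation, never deliver a verdict.

```python
# Toy illustration of the speed-inversion signal; all records and the
# ratio threshold are hypothetical. A flag is an invitation to a
# calibration conversation (see "generous suspicion"), not a verdict.
from dataclasses import dataclass


@dataclass
class TaskRecord:
    name: str
    machine_legible: bool   # complex, but well-suited to an AI system
    expected_hours: float   # typical human effort for this kind of task
    actual_hours: float


def speed_inversion(tasks: list[TaskRecord], ratio: float = 3.0) -> list[str]:
    # The signal is the pairing: hard machine-legible work finished
    # uncannily fast AND simple human-facing work dragging.
    fast_hard = [t.name for t in tasks
                 if t.machine_legible and t.actual_hours * ratio < t.expected_hours]
    slow_easy = [t.name for t in tasks
                 if not t.machine_legible and t.actual_hours > t.expected_hours * ratio]
    return fast_hard + slow_easy if (fast_hard and slow_easy) else []


tasks = [
    TaskRecord("40-page literature synthesis", True, expected_hours=30, actual_hours=4),
    TaskRecord("Two-line note to a community partner", False, expected_hours=0.2, actual_hours=1.5),
]
print(speed_inversion(tasks))  # both tasks flagged, because the mismatch is paired
```

Even as a toy, this preserves the text's point: tempo is evidence only as a pattern across tasks, never as raw speed on any single deliverable.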