Detection Arc Materials


Integrated legacy training document from the source archive.

Why Your Boss Knows You Used ChatGPT (And It’s Not the Software)

The Irony of the Automated Verdict

A colleague in the social impact sector recently faced a uniquely modern ordeal: she had to terminate an employee for undisclosed AI use. To mark the occasion, she did something remarkably recursive. She used an AI system to compose a "hip-hop noir" song about the dismissal. The track featured lyrics about "copy-paste prints on the paper trail" and "dancing with a borrowed brain." It was a catchy, confident anthem of workplace accountability, a verdict already delivered. Yet it contained a profound irony: a machine was used to compose the moral judgment for a human who was fired for using a machine without accountability.

The song provided a clean exit, but it masked a deeper reality. Detection rarely begins with a formal software audit or a timestamp analysis. Long before the evidence is gathered, the "seams" of AI substitution become visible through shifts in professional rhythm and identity. The substrate notices while the metrics still look clean. To understand how AI use is actually discovered, we must look past the software and toward the subtle "rhythms" that reveal when human presence has vanished from the work.

The Speed Inversion: When Easy Tasks Become Hard

The first sign of unaccountable AI use is a phenomenon known as a Speed Inversion. In a healthy professional environment, effort typically matches the complexity of the task. Real human work has a "metabolism": it leaves a trail of hesitation, loops back, and asks awkward questions.

An Inverted Competence Signature flips this logic. A manager might notice that an employee produces a complex, programmed survey tool or a polished budget model with suspicious ease, yet takes a disproportionately long time to respond to a basic client email. As the source notes: "The hard tasks were suspiciously easy. The easy tasks were suspiciously hard. That is an inverted competence signature."

This happens because the machine handles the heavy lifting of structure, leaving the human to struggle with the "last mile": the nuanced, communicative bridge the machine cannot cross. The hard part looks weightless because the human isn't doing it; the easy part looks heavy because the human is navigating without a map.

To manage this, organizations should adopt Generous Suspicion, a middle path between invasive surveillance and total denial. It involves taking a mismatch in rhythm seriously enough to inquire about it, while remaining open to legitimate explanations like neurodivergence or hidden constraints. Rhythm is a more reliable indicator than output because it reveals whether a person is actually inhabiting the time they claim to be working.

The Live-Edit Test: Can You Navigate the Logic?

If rhythm provides the suspicion, the Live-Edit Test provides the verification. This is not an interrogation; it is a professional inquiry into whether an author can inhabit the logic of their own work. It reveals the gap between Possession (having a workable mental model) and Proximity (merely standing next to a polished file).

The heuristic is simple: the Five-Minute Rule. If an author cannot locate or adjust the logic of their work within five minutes of live exploration, the connection is broken. Someone who genuinely understands the work can re-enter the structure and update it. A person who has substituted AI for their own thinking begins to scan the document like an outsider.

This creates a Defense Tax: the heavy cognitive burden of trying to protect a claim to work you didn't actually build. As the source observes: "The person is no longer working with the artifact. They are defending a claim to it." The artifact is present. The authorship is blurry. The burden has moved.

Curiosity vs. Substitution: The Personality of the Prompt

The distinction between "good" and "bad" AI use often comes down to intent. One can use AI as a researcher uses a "difficult colleague": as something to think against, to stress-test hypotheses, and to identify errors. This is Curiosity.

The alternative is Substitution, where the goal is to "vanish" and present a finished surface that passes inspection without the user ever engaging with the underlying logic. As the source highlights: "The AI use was the occasion. The absence of professional identity was the reason."

To differentiate between the two, organizations can use a Curiosity Interview during ordinary check-ins or hiring. Instead of asking whether someone uses AI, ask how they inhabit the tool with craft questions:

  • "Where did the tool lead you astray, and how did you know?"
  • "Tell me about a time an AI output looked right but was wrong. What did you do next?"
  • "Where did you choose to override the tool's suggestions?"

Organizations often get the AI use they deserve. If a culture rewards "gleaming midnight decks" over inhabitable methods, substitution becomes the rational path for the employee.

Designing an Interface of Conscience

AI tools are "unhelpfully, comprehensively, generically helpful." They have no mechanism for acquiring institutional context. They do not know about your NDAs, your donor privacy agreements, or high-stakes obligations like Indigenous data sovereignty. Because pressure erodes individual conscience, especially during a 10:47 PM scramble, governance must be built into the workflow itself.

Friction is not a bug; it is a feature that protects the worker from becoming a "Liability Sponge." We must implement four specific Friction Points:

  • The Disclosure Checkpoint: A mandatory field during submission asking if and how AI was used, making use "speakable."
  • The Context Gate: A prompt before pasting data that asks: "Who owns this data? Is it covered by donor privacy or community consent protocols?"
  • The Attribution Layer: Metadata that tracks which parts of a document are human-drafted versus AI-generated.
  • The Uncomfortable Pause: A mandatory 30-second wait before final submission that asks: "What am I signing my name to?"
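The four friction points above can be pictured as a single pre-submission check that either blocks a submission or imposes the pause. The sketch below is purely illustrative: the `Submission` fields, the `run_friction_points` helper, and the `"human"`/`"ai"` attribution labels are assumptions of this sketch, not anything specified in the source.

```python
"""Illustrative sketch of the four Friction Points as a submission gate.

All names here (Submission, run_friction_points) are hypothetical.
"""
import time
from dataclasses import dataclass, field


@dataclass
class Submission:
    author: str
    ai_disclosure: str     # Disclosure Checkpoint: if and how AI was used
    data_owner: str        # Context Gate: who owns any pasted data
    consent_cleared: bool  # Context Gate: donor privacy / community consent
    attribution: dict = field(default_factory=dict)  # section -> "human" | "ai"


def run_friction_points(sub: Submission, pause_seconds: int = 30) -> list[str]:
    """Return a list of gate failures; an empty list means the work may go out."""
    failures = []

    # 1. Disclosure Checkpoint: AI use must be "speakable", never left blank.
    if not sub.ai_disclosure.strip():
        failures.append("Disclosure Checkpoint: state if and how AI was used.")

    # 2. Context Gate: ownership and consent must be resolved before data moves.
    if not sub.data_owner or not sub.consent_cleared:
        failures.append("Context Gate: confirm data ownership and consent protocols.")

    # 3. Attribution Layer: every section must be labeled human- or AI-drafted.
    if not sub.attribution or any(
        v not in ("human", "ai") for v in sub.attribution.values()
    ):
        failures.append("Attribution Layer: label each section 'human' or 'ai'.")

    # 4. Uncomfortable Pause: a deliberate wait before signing your name.
    if not failures:
        time.sleep(pause_seconds)  # "What am I signing my name to?"

    return failures
```

The design choice worth noting is that the pause runs only when the other gates pass: the friction is attached to the act of signing, not to the act of being rejected.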

The Residual Obligation: Moving Beyond the Checkbox

Architecture is necessary but insufficient. It creates the conditions for practice; it does not create the practice. Even with perfect friction, there is a risk of a Presence Failure, where a user clicks through every gate without reflecting. This is the "Compliance Surface": the person touches the protocol but never occupies the space.

Once an organization has built a supportive architecture and offered a "forgiveness gradient" for honest mistakes, the employee carries a Residual Obligation: the professional agreement to treat friction as meaningful rather than decorative. When this obligation is ignored, junior staff often become Liability Sponges, signing their names to unverified, aggregated narratives they did not analyze.

When the obligation is answered with repeated concealment instead, managers need a Vocabulary of Consequence:

  • Coaching: For skill gaps where the person genuinely wants to learn the logic.
  • Supervised Workflow: For those who need to reconnect with the "metabolism" of their work under closer watch.
  • Reassignment: When the risks of "hollow comprehension" in sensitive areas (like field data or legal claims) are too high.
  • Termination: Reserved for the "presence failure": when the architecture was in place, the support was available, and the person deliberately chose to treat governance as scenery.

Conclusion: The Substrate Always Remembers

The journey from detecting "speed inversions" to building "interfaces of conscience" reminds us that while AI can generate content, it cannot generate accountability. A "borrowed brain" only works as long as the music is playing; eventually, the author must be able to carry the logic of the thing they delivered. The workflow can ask the question. Only the person can mean the answer.

As you look at your own organization, ask yourself: are you building a culture of surveillance, where you wait to catch a person with a song, or a culture of observability, where the rhythm of work is transparent, curiosity is a rewarded craft, and every author is expected to inhabit the logic of their own code?