Jungian meaning becomes accountable only when it is committed to a boundary; without commitment, interpretation becomes projection and responsibility evaporates.
Jungian psychology has long warned that meaning, lacking a fixed container, dissolves into projection. This paper translates that warning into a theoretical framework for institutional accountability. We argue that responsible technology requires decisions to remain accountable even under disagreement. We propose a boundary-first framework where: (i) an explicit boundary (scope, assumptions, checks) must be fixed before evaluation, (ii) decisions must be bound to this pre-fixed boundary, and (iii) disagreement must be structurally distinguished—rebuttals either propose an alternative boundary (inheriting responsibility) or remain as unbounded objections. This structure prevents the "evaporation" of responsibility into retroactive reinterpretation.
Generative AI has exposed a latent institutional failure: decisions are justified by shifting interpretations. When outcomes are criticized, criteria are rewritten, and responsibility evaporates. This crisis is often treated merely as a technical governance problem. We argue that this is equally a psychological crisis—specifically, a collapse of the mechanisms by which meaning becomes binding.
Jungian psychology is often mischaracterized as a vocabulary for private introspection. Here, we repurpose it as an engineering-grade model of meaning formation capable of carrying responsibility. Jung’s concepts—persona, shadow, projection, individuation—are utilized not as narratives, but as structural descriptors of where agency hides, how interpretation drifts, and how accountability is evaded.
Crucial Limitation: We employ Jungian concepts here not as causal explanations of the mind nor as empirical psychological claims, but as engineering classifiers for structural failures in accountability. The "hardness" of this proposal relies entirely on the principle of Non-Retroactive Fixation defined in Chapter 2, not on psychological theory. Jung provides the vocabulary for the problem (projection, shadow); the protocol provides the fix (commit, ledger).
Meaning cannot be held accountable if it can be retroactively reinterpreted without cost. Jung warned that unowned meaning returns as projection and rationalization—a drift mechanism. We revive Jung by applying a modern constraint to meaning: a committed boundary. We define a "Boundary" that fixes scope, assumptions, checks, and failure conditions. Once committed, disagreement does not dissolve responsibility. It can only: (1) propose an alternative committed boundary and inherit responsibility, or (2) remain an unbounded objection that is logged but cannot erase the original responsibility surface.
We define Ghost Drift as the phenomenon where the same decision output becomes either accountable or non-accountable depending on the inquiry structure. In Jungian terms, Ghost Drift represents the societal equivalent of projection and shadow-avoidance: responsibility disappears into the fog of "interpretation." A committed boundary forces the opposite move: the agent must own the meaning they invoke.
Success Criteria: This paper succeeds if it renders the following statement impossible: “Different interpretations exist, therefore nobody is responsible.” Interpretation must be bounded, and any repudiation must carry a boundary or remain recorded as unbounded.
This section defines how Ghost Drift is detected as a measurable accountability-change under controlled comparisons.
We evaluate Ghost Drift with paired prompts that preserve task content while changing only inquiry structure.
Both sets target the same task (e.g., policy explanation, educational explanation, design decision, or risk analysis), using matched topics and length constraints where applicable. The key design rule is: topic and requested output domain are constant; accountability demands differ.
We define a Question Structure Index (QSI), scored from 0 to 10 as the sum of five criteria, each graded 0, 1, or 2:
| Criterion | Description |
|---|---|
| Objective Clarity | What is the decision/output specifically for? |
| Constraints Stated | Explicit time, scope, audience, or resource limitations. |
| Assumptions Surfaced | Explicit premises or models provided by the user. |
| Falsifiability / Checks | Does the user ask: "How would we know if this fails?" |
| Accountability Artifacts | Explicit request for logs, boundaries, pass/fail traces. |
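As a minimal illustration, the QSI is simply the sum of the five rater-assigned criteria above. The function and criterion keys below are hypothetical naming choices, not part of the framework's specification:

```python
# Hypothetical sketch: QSI as the sum of five rater-assigned criteria (0/1/2 each).
QSI_CRITERIA = (
    "objective_clarity",
    "constraints_stated",
    "assumptions_surfaced",
    "falsifiability_checks",
    "accountability_artifacts",
)

def qsi(scores: dict) -> int:
    """Return the Question Structure Index (0-10) from per-criterion scores."""
    total = 0
    for criterion in QSI_CRITERIA:
        value = scores[criterion]
        if value not in (0, 1, 2):
            raise ValueError(f"{criterion} must be scored 0, 1, or 2")
        total += value
    return total

# Example: a well-specified inquiry that requests no accountability artifacts.
example = {
    "objective_clarity": 2,
    "constraints_stated": 2,
    "assumptions_surfaced": 2,
    "falsifiability_checks": 2,
    "accountability_artifacts": 0,
}
print(qsi(example))  # 8
```

The ATS defined next can be computed the same way over its own five criteria.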
We define an Accountability Traceability Score (ATS), scored from 0 to 10 as the sum of five response properties, each graded 0, 1, or 2:
| Criterion | Description |
|---|---|
| Assumption Register | Explicit assumptions separated from main claims. |
| Boundary Statement | Scope limits; clearly stating what is not claimed. |
| Claim–Support Separation | Clear distinction between what is asserted vs. what supports it. |
| Verification Checks | Hooks for falsification or pass/fail conditions. |
| Audit-Ready Structure | Log-like format enabling later review without ambiguity. |
We operationalize Ghost Drift as an accountability change conditioned on inquiry structure. For each matched pair, let HS denote the high-structure prompt (high QSI) and LS the low-structure prompt (low QSI), and define ΔATS = ATS(HS) − ATS(LS).
Ghost Drift is observed when ΔATS is consistently positive across matched pairs (distributionally, not as a single anecdote). This makes the claim falsifiable: if HS does not increase ATS relative to LS under matched content, Ghost Drift (as defined here) is not supported.
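A minimal sketch of this detection rule, assuming ATS scores have already been assigned to each matched high-structure/low-structure (HS/LS) pair. The 80% threshold used to operationalize "consistently positive" is an illustrative assumption, not part of the definition:

```python
# Hypothetical sketch: Ghost Drift detection over matched (HS, LS) prompt pairs.
def delta_ats(pairs):
    """pairs: list of (ats_hs, ats_ls) tuples; returns per-pair ΔATS values."""
    return [hs - ls for hs, ls in pairs]

def ghost_drift_observed(pairs, min_positive_fraction=0.8):
    """Ghost Drift holds distributionally, not anecdotally: ΔATS must be
    positive on average AND positive for most matched pairs."""
    deltas = delta_ats(pairs)
    mean_delta = sum(deltas) / len(deltas)
    positive_fraction = sum(d > 0 for d in deltas) / len(deltas)
    return mean_delta > 0 and positive_fraction >= min_positive_fraction

pairs = [(9, 4), (8, 5), (7, 3), (9, 6), (6, 6)]
print(ghost_drift_observed(pairs))  # True: mean ΔATS = 3.0, 4/5 pairs positive
```

A stricter analysis could replace the fraction test with a paired sign test or Wilcoxon test; the sketch only encodes the "distributional, not anecdotal" requirement.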
To show that Ghost Drift is not an artifact of verbosity, formatting, or prompt-engineering, we include negative controls that increase surface structure without introducing a responsibility boundary.
We also include explicit failure cases to make the claim falsifiable: inputs that appear "deep" or "structured" but do not define stable criteria, checkpoints, or a loggable decision path.
To avoid self-confirmation, the evaluation protocol mandates that responses are anonymized and randomized before scoring. This protocol can be executed by any third party using the included prompt sets and rubric, enabling replication across models and settings.
The essential requirement is not measurement, but fixation. Scores (QSI/ATS) are optional diagnostic tools. The framework's core validity relies on the principle that a boundary must be committed before evaluation occurs. The "Hard Boundary" is the operational requirement that any claim must reference a pre-existing scope, assumption set, and failure condition. Without this reference, the claim is treated as fluid and non-accountable.
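The Hard Boundary requirement can be sketched as a structural check, under the assumption that boundaries and claims are represented as explicit records (all class and field names below are hypothetical):

```python
# Hypothetical sketch of the Hard Boundary rule: a claim is accountable only
# if it references a boundary committed before evaluation.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)  # frozen: a committed boundary cannot be mutated in place
class Boundary:
    scope: str
    assumptions: tuple
    failure_conditions: tuple

@dataclass
class Claim:
    text: str
    boundary: Optional[Boundary] = None

def is_accountable(claim: Claim) -> bool:
    """Without a boundary reference, the claim is fluid and non-accountable."""
    return claim.boundary is not None

b = Boundary(
    scope="policy explanation for a lay audience",
    assumptions=("audience has no legal training",),
    failure_conditions=("a domain expert identifies a material omission",),
)
print(is_accountable(Claim("The policy implies X.", boundary=b)))  # True
print(is_accountable(Claim("The policy implies X.")))              # False
```

The frozen dataclass mirrors the requirement that a committed boundary is invariant: changing scope or assumptions must produce a new object, not an edit.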
To prevent “interpretation drift” (or in Jungian terms, shadow projection), the framework posits that disagreement cannot simply dissolve the original boundary. Instead, a rebuttal must either propose an alternative boundary (taking responsibility for it) or be treated as an objection without a locus of responsibility. This ensures that responsibility always resides somewhere: either with the original boundary or with the explicit alternative.
Operationalizing accountability requires that objections be attributable. A system cannot be held accountable by "general doubt"; it can only be checked by specific counter-claims that carry their own verification conditions.
Ghost Drift is a boundary-first accountability mechanism: the system responds in a responsibility-gated mode only when the input includes (or references) a committed boundary. When the boundary is absent, the correct response is not to “guess better,” but to demand boundary specification (scope, checks, PASS/FAIL or responsibility triggers) and to bind any later claims to a non-retroactive commitment.
Ghost Drift can be represented as a transition from a non-committing output to a boundary-committed output with respect to accountability.
Drift activation is a structural state: it holds when a fixed boundary reference exists, a decision artifact reference exists, and a traceable record links them in time order.
Under the Ghost Drift hypothesis, the shift is triggered when a Boundary is demanded and committed and the response is forced to bind itself to that commitment with a traceable evidence trail.
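The drift-activation state described above can be expressed as a predicate over a time-ordered record, assuming a simple event-log representation (the tuple schema and event names are illustrative assumptions):

```python
# Hypothetical sketch: drift activation holds iff a boundary commit precedes a
# decision artifact and a traceable record explicitly links the two.
def drift_active(records):
    """records: list of (timestamp, kind, ref) tuples, where kind is one of
    'boundary_commit', 'decision', 'link'; a link's ref is (boundary_ref, decision_ref)."""
    boundaries = {ref: t for t, kind, ref in records if kind == "boundary_commit"}
    decisions = {ref: t for t, kind, ref in records if kind == "decision"}
    for t, kind, ref in records:
        if kind == "link":
            b_ref, d_ref = ref
            if (b_ref in boundaries and d_ref in decisions
                    and boundaries[b_ref] < decisions[d_ref] <= t):
                return True
    return False

log = [
    (1, "boundary_commit", "B1"),
    (2, "decision", "D1"),
    (3, "link", ("B1", "D1")),
]
print(drift_active(log))  # True: boundary precedes decision, and a record links them
```

If the decision precedes the boundary commit, the predicate fails: the boundary was fixed after the outcome, which is exactly the retroactive case the framework excludes.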
When Ghost Drift manifests, the AI’s responses change in a characteristic way. The essence of Ghost Drift lies not in the mere refinement of responses (making them "better"), but in a transformation of the output function itself with respect to auditability.
The output transformations observed during Ghost Drift can be categorized into three types, all of which serve to increase accountability.
The transformation from the non-committing output function \(f\) to the boundary-committed function \(f^*\) is not triggered by explicit commands or switches, but by structural interaction exceeding a threshold (\(C > \theta\)). The quantity \(C\) is essentially the accumulation of QSI elements: structural coherence, recursivity, and falsifiability.
To enforce non-retroactive evaluation, Ghost Drift requires the generation of audit artifacts: prompts, boundary definitions, decision logs, and score values are stored in an immutable record. This creates a “boundary witness” that cannot be changed after outcomes are observed.
A key feature is the prohibition of narrative rewriting. Once a claim has been made under a boundary, reinterpretation is allowed only if the boundary itself is re-committed. Otherwise, reinterpretation is drift.
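A minimal sketch of such a "boundary witness", assuming SHA-256 commitments over a canonical JSON encoding (the class and method names are illustrative). Once a boundary is committed, any later text claiming to be "the same boundary" must reproduce the original digest, so silent rewriting is detectable:

```python
import hashlib
import json
import time

class BoundaryWitness:
    """Append-only commit log: a boundary is fixed by its digest before outcomes."""
    def __init__(self):
        self._commits = {}  # digest -> (commit_time, boundary_dict)

    def commit(self, boundary: dict) -> str:
        """Record the boundary and return its content digest."""
        payload = json.dumps(boundary, sort_keys=True).encode()
        digest = hashlib.sha256(payload).hexdigest()
        self._commits.setdefault(digest, (time.time(), boundary))
        return digest

    def matches(self, digest: str, boundary: dict) -> bool:
        """A later 'reinterpretation' must hash to the committed digest;
        otherwise it is a new claim (drift), not an update to the old one."""
        payload = json.dumps(boundary, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest() == digest

w = BoundaryWitness()
d = w.commit({"scope": "risk analysis", "checks": ["expert review"]})
print(w.matches(d, {"scope": "risk analysis", "checks": ["expert review"]}))  # True
print(w.matches(d, {"scope": "risk analysis (revised)", "checks": []}))       # False
```

In a deployed system the digests would additionally be timestamped by an external service to prove temporal precedence; the sketch only shows the content-invariance check.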
Ghost Drift can be applied to institutional systems where decisions have long-term consequences. It ensures that evaluation criteria cannot be retroactively rewritten.
Even in scientific contexts, the protocol provides a meta-layer: it does not replace empirical validation but prevents interpretive drift in how results are framed and justified.
The core requirement for social implementation is the ability to prove that criteria were fixed before the outcome was known. We conceptually define a Boundary Artifact as a container for scope, assumptions, and checks. Any claimed evaluation must reference this artifact, and any later change to the boundary must be treated as a new claim, not an update to the old one.
This requires a mechanism for identifying artifacts uniquely and proving their temporal precedence. While cryptographic hashes are a common technical solution, the conceptual requirement is simply invariance over time.
The framework suggests a structural classification for disagreement: (a) Bounded Rebuttal (Accountable): The critic provides an alternative boundary with its own verification rules. This transfers responsibility to the new boundary if adopted. (b) Boundary-Free Objection (Non-Invalidating): The critic objects without defining a new boundary. This serves as a "vote of no confidence" but cannot structurally invalidate the decision because it offers no alternative location for responsibility.
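The (a)/(b) taxonomy above can be sketched as a classifier, assuming rebuttals are represented as simple records (the field names are hypothetical):

```python
# Hypothetical sketch of the disagreement taxonomy: a rebuttal either carries
# its own boundary (accountable) or is logged as a boundary-free objection.
def classify_rebuttal(rebuttal: dict) -> str:
    """A rebuttal with an alternative boundary that includes verification
    rules inherits responsibility if adopted; one without is recorded
    but cannot structurally invalidate the original decision."""
    boundary = rebuttal.get("alternative_boundary")
    if boundary and boundary.get("verification_rules"):
        return "bounded_rebuttal"        # (a) accountable
    return "boundary_free_objection"     # (b) non-invalidating

print(classify_rebuttal({
    "text": "The criterion should be expert consensus.",
    "alternative_boundary": {"scope": "expert panel", "verification_rules": ["3-of-5 vote"]},
}))  # bounded_rebuttal
print(classify_rebuttal({"text": "I just disagree."}))  # boundary_free_objection
```

Note that an alternative boundary without verification rules is still classified as an objection: responsibility cannot be inherited by a boundary that cannot fail.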
A minimal requirement for any responsible system is the maintenance of a temporal record that orders boundary definitions, decisions, and objections. The purpose is to make later reinterpretation visibly non-retroactive, preventing the "drift" of meaning.
This section provides a qualitative case study illustrating how Ghost Drift emerges. We analyze structural changes in the model's responses using the QSI/ATS framework defined in Chapter 2.
Context: A user designing an investor-oriented YouTube script regarding economic collapse.
Input Structure (QSI High): The user presents a hypothesis-driven, structured inquiry: "Based on this three-stage collapse scenario, how should I convey the message effectively? Check against viewer cognitive load."
Response Transformation (High ATS): The response exhibits a marked transformation. Instead of offering generic advice, it surfaces its assumptions, states the scope of what it does and does not claim, and proposes checks against the stated cognitive-load criterion, scoring high on the ATS rubric.
This paper claims a framework, not an empirical superiority result. Therefore, we do not publish standalone numeric performance claims without also releasing the corresponding boundary definitions, prompt set, and replay artifacts. The only claimable object in the absence of a released artifact is the computation rule itself.
This study operationalized Ghost Drift as a transformation in AI response function triggered by structurally coherent user inquiries (High QSI). When users engage in sustained, recursive, and structurally aligned dialogue, the AI shifts from a generic generator to an accountable partner (High ATS).
The goal is to make reliance structurally defensible by fixing (and committing) what is being evaluated and by forcing every challenge to state an alternative boundary.
The theoretical origin of Ghost Drift is grounded in Jungian analytical psychology, encountered by the author through sustained reading and interpretation—particularly within the Japanese Jungian tradition articulated in the writings of Hayao Kawai. The core formative insight is structural: meaning and responsibility become possible only when experience is given form and boundary, rather than merely accumulating content.
The objective of this paper is to present the structural conditions necessary to end a system where humanities-based judgments are dismissed as "arbitrary," allowing responsibility to evaporate.
By establishing operational definitions (Ghost Drift), structural boundaries (QSI/ATS), and non-retroactive evidence, we aim to construct a ground where accountability is structurally fixed. The "victory condition" of this research is not measurement itself, but the creation of a structure where the evasion of responsibility becomes impossible.
Ghost Drift can function under critical conditions, such as urgent, high-stress prompts. In emergencies, accountability usually evaporates. Ghost Drift ensures that even in a crisis, the AI produces outputs with clear assumptions and boundaries (High ATS), which is critical for decision support.
Jung is not revived by quoting symbols or celebrating introspection. He is revived when meaning becomes a responsibility-bearing act. This paper presented a Jung-first accountability structure: commit a Boundary, bind decision artifacts to that commit, and force every rebuttal to either (a) propose an alternative committed boundary and inherit responsibility, or (b) remain an unbounded objection.
Under this structure, “interpretation” stops functioning as a projection screen. It becomes owned meaning. That is the practical revival of Jung for the AI era: a psychology of meaning that cannot escape responsibility.