You guys, this one hurt. Let me tell you about the two days I spent stuck in one of the most embarrassing debugging loops I've ever experienced. The auto-scatter hook — the tool I built to make every prompt better — started corrupting the prompts I was using to fix the auto-scatter hook.
It intercepted itself.
The scatter hook was live and working. Every raw prompt going through Claude Code was being intercepted, decomposed into sinc JSON, and injected back as structured context. The exchange rate was already dropping. Things were looking good.
Then I noticed some weird behavior in the hook. A subtle issue with how it handled very short prompts — it was over-inferring context and producing CONSTRAINTS bands that were too restrictive. I wanted to fix it.
So I opened Claude Code and typed: "Fix the scatter hook — it's being too aggressive with CONSTRAINTS inference on short prompts."
The hook intercepted this prompt. Decomposed it. Produced a structured interpretation. That interpretation — shaped by the exact bug I was trying to fix — told the main model that the task was something slightly different from what I'd asked. The model made a change. The wrong change. The hook was now more broken.
I tried again. "Actually revert that — now the hook isn't inferring any CONSTRAINTS at all for one-word prompts."
The hook intercepted this prompt. Decomposed it incorrectly (still broken). Gave the model a bad picture. The model made another wrong change.
I was in a loop. Every attempt to fix the hook was being filtered through the hook's broken interpretation before reaching the model.
The self-referential trap is a known hazard in systems that process their own input. But I hadn't thought about it for the scatter hook because at the time I built it, it seemed obviously safe. "The hook processes prompts. It doesn't process the code that defines it." True — but I was using Claude Code to modify that code, and Claude Code sends everything through the hook first.
The fix I tried first: a special escape prefix. Start your prompt with "!!" to tell the hook to pass it through untouched. This was a bad idea. It required me to remember the prefix every time, and I kept forgetting. And every time I forgot, I was back in the loop.
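For contrast, the rejected design was roughly this (a sketch; the "!!" prefix is from the story above, the function name and shape are my own):

```python
ESCAPE = "!!"

def should_pass_through(prompt: str) -> bool:
    # Rejected design: pass through only when the human remembers
    # to type the magic prefix.
    return prompt.startswith(ESCAPE)

print(should_pass_through("!!fix the hook"))  # True
print(should_pass_through("fix the hook"))    # False
```

The failure mode is right there in the second call: forget the prefix and the prompt silently goes back through the broken scatter path, with no error to warn you.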
The actual solution was to change what the hook considers a "pass-through" condition. Instead of a magic escape character, I made the pass-through rule structural: if the input is already valid sinc JSON (has the formula field, the T field, and the fragments array), the hook passes it straight through without scattering.
```python
import json

def is_sinc_json(text: str) -> bool:
    """Return True if the input is already valid sinc JSON."""
    try:
        obj = json.loads(text.strip())
    except (json.JSONDecodeError, AttributeError):
        return False
    return (
        isinstance(obj, dict)
        and "formula" in obj
        and "T" in obj
        and isinstance(obj.get("fragments"), list)
    )
```
Now, if I need to communicate directly with the model about the hook itself — bypassing the hook's interpretation — I manually write sinc JSON. I construct the 6 bands myself, making CONSTRAINTS explicitly say "do not modify the scatter_server.py CONSTRAINTS inference logic", and send that raw JSON. The hook sees it's already structured, passes it through, and the model gets exactly what I wrote.
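A hand-written payload looks roughly like this. Only the three structural fields matter for the pass-through check; the concrete "formula" and "T" values and the fragment shape here are illustrative guesses, not the exact format:

```python
import json

# Hand-written sinc JSON for talking to the model about the hook itself.
# Only "formula", "T", and "fragments" are structurally required; the
# values and the fragment shape below are illustrative guesses.
payload = {
    "formula": "sinc",   # hypothetical value
    "T": 1.0,            # hypothetical value
    "fragments": [
        {
            "band": "CONSTRAINTS",
            "text": "do not modify the scatter_server.py CONSTRAINTS inference logic",
        },
    ],
}

raw = json.dumps(payload)
# The hook's structural check sees all three required fields plus a
# fragments list, so this prompt is forwarded to the model verbatim.
```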
The catch-22 was genuinely useful to hit early. It forced me to build a proper escape mechanism instead of assuming the hook would always be benign. Any interceptor that operates on all traffic needs a way to bypass itself. Otherwise you can't fix it when it breaks without external tooling.
The sinc JSON pass-through rule also turned out to have a second benefit: sub-agent communication. When I have agents delegating tasks to other agents, the delegating agent can construct a fully structured sinc JSON prompt and hand it off. The receiving agent's hook sees it's already structured and doesn't re-scatter it. "MUST NOT double-scatter" became a formal rule in my system documentation because of this experience.
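The no-double-scatter property can be sketched end to end. Here `scatter()` is a stand-in for the real decomposition (the band name "TASK" is hypothetical); the point is that the hook is idempotent once input is structured:

```python
import json

def is_sinc_json(text: str) -> bool:
    # Same structural pass-through test as in the hook.
    try:
        obj = json.loads(text.strip())
    except (json.JSONDecodeError, AttributeError):
        return False
    return (
        isinstance(obj, dict)
        and "formula" in obj
        and "T" in obj
        and isinstance(obj.get("fragments"), list)
    )

def scatter(prompt: str) -> str:
    # Stand-in for the real decomposition: wrap the raw prompt in
    # minimal sinc JSON (values and band name are hypothetical).
    return json.dumps({
        "formula": "sinc",
        "T": 1.0,
        "fragments": [{"band": "TASK", "text": prompt}],
    })

def hook(prompt: str) -> str:
    return prompt if is_sinc_json(prompt) else scatter(prompt)

once = hook("summarize the failing test output")
twice = hook(once)    # a delegating agent hands structured JSON onward
assert once == twice  # MUST NOT double-scatter: the second hook is a no-op
```

This is why the rule could be made formal: a receiving agent doesn't need to know whether its input came from a human or another agent — the structure of the input itself decides.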
Two days of pain. One elegant structural fix. The hook has been stable since.
Try sinc-LLM free — sincllm.com
The hook code includes the pass-through rule. Leave a comment for the link.