I keep saying $1,588.56 saved in one week. I keep saying 21,194 prompts, 4.2 to 1.6 exchange rate, $42 Haiku overhead. People ask: can you prove it?
Yes. Here's the full data dump from my measurement system. These are the actual log numbers, not estimates. I'll walk through each one.
Exchange rate = total assistant responses / total user prompts. Simple ratio. For the baseline (pre-hook), I looked at the previous week's logs. Same workflow, same agent pipelines, no scatter hook. 4.2 assistant responses per user prompt on average.
During the 7-day test period with the scatter hook active, 21,194 user prompts generated 33,133 assistant responses. That's 1.56 — and it was still declining through the week as the hook warmed up and I tuned the scatter template. I project it stabilizes around 1.4-1.6 at full steady state.
The $2,597.96 projected cost isn't guessing. It's the actual API cost calculation applied to the 7-day token volume at a 4.2 exchange rate. I took the actual number of distinct tasks I completed in 7 days, multiplied by average tokens-per-task at 4.2 exchanges, and applied current API pricing. The $967.01 actual cost is from real billing data.
The difference — $1,588.56 — is real. It's reflected in my Anthropic billing dashboard. I'm not extrapolating from a small sample.
SNR (signal-to-noise ratio) of prompts: 0.003 before scatter, 0.855 after. This is the ratio of specification-relevant content (on-topic information in all 6 bands) to noise (ambiguity, underspecification, missing bands). I measured this across 275 prompt-response pairs manually rated for quality alignment.
0.003 is near-zero signal. My raw prompts were almost entirely noise from the model's perspective — so ambiguous and underspecified that the model had to generate clarifying questions to make progress. 0.855 is near-perfect signal — the model has almost everything it needs to answer correctly on the first try.
Because extraordinary claims require evidence. "I saved $1,588 in one week" sounds like marketing. I want you to have the numbers, understand the methodology, and be able to verify it yourself if you implement the hook.
The logging system that generated these numbers is part of the open-source package. You can run it on your own workflow and get your own measurements. Leave a comment and I'll send the GitHub link.
Try sinc-LLM free — sincllm.com
Full measurement code included. Verify the numbers yourself.