I Wasted 80,000 Tokens Because I Forgot to Point at the Line

March 25, 2026 · 8 min read · prompt-engineering sinc-llm constraints genie-metaphor

Contents

  1. What I typed — the exact prompt
  2. What happened for 80,000 tokens
  3. The diagnosis: four missing bands
  4. Before and after: the same task, rewritten
  5. The math of a missing CONSTRAINTS band
  6. The lesson I cannot unlearn
x(t) = Σ x(nT) · sinc((t − nT) / T)
The sinc-LLM framework applies Nyquist-Shannon sampling to prompt engineering. Each band is a frequency sample of intent.

What I typed — the exact prompt

I will give you the exact words I typed because precision matters here. The prompt was:

"Change the line where the timeout is set to use the environment variable instead of the hardcoded value."

Eighteen words. One sentence. One apparent task. I typed it in about four seconds, hit Enter, and went to make coffee. By the time I came back, the context window was nearly full.

That prompt cost 80,000 tokens. The change I wanted would have taken 3 tokens to implement: replace one string literal with one variable reference. The ratio of tokens spent to tokens needed was approximately 26,666 to 1.

I did not fully understand what had gone wrong until I sat down and dissected the prompt word by word. That dissection became the foundation of everything I now know about prompting.

What happened for 80,000 tokens

The model started reading files. It found a config.py with a timeout field. It found a client.py with a timeout argument on an HTTP request. It found a db.py with a connection pool timeout. It found a queue.py with a retry timeout. It found a server.py with a socket timeout. It found constants in constants.py and a settings object in settings.py.

For each one it ran an analysis: is this hardcoded? Is there an environment variable pattern already established? Should I change this one? Should I change all of them? The model was not malfunctioning. It was doing exactly what I had asked — finding "the line where the timeout is set" — and there were eleven such lines in my codebase.

Halfway through the context window the model began drafting changes. It changed three of the eleven. Then it reconsidered. It rolled back one. It checked for imports. It added an os.environ.get call where one did not exist. It wrote a helper function. I had not asked for a helper function. I had not asked for it to touch imports. I had asked for "the line" — singular — and the model, finding eleven candidates, had chosen to be thorough rather than to ask.

This is what thoughtful general-purpose completion looks like when the wish is vague: maximum coverage, minimum precision, enormous token spend.

The diagnosis: four missing bands

After the incident I ran my original prompt through what would later become the sinc diagnostic. I asked: which bands are populated, and which are empty?

PERSONA: empty. The model did not know what kind of engineer I am, what my codebase conventions are, whether I prefer explicit environment variable handling or a helper abstraction.

CONTEXT: empty. No file specified. No function specified. No information about which timeout was the one I meant. No context about the size or structure of the codebase.

DATA: empty. No mention of the specific variable name, the current hardcoded value, the environment variable name I intended to use, or the pattern used elsewhere in the project.

CONSTRAINTS: empty. No instruction to touch only one file. No instruction to leave all other files unchanged. No instruction not to write helper functions. No instruction not to change imports. No instruction to ask rather than guess if ambiguous.

FORMAT: implied but absent. No instruction about whether to show a diff, a full file replacement, or an inline edit with explanation.

TASK: present, but dangerously underspecified. "Change the line" with no pointer to the line.

Five of six bands were empty or nearly so. I had given the model a task and nothing else. The model filled all the empty bands with its best guess. That guessing cost 80,000 tokens.
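The diagnostic itself can be sketched as a simple emptiness check over the six bands. This is an illustration of the idea, not the sinc tool: the band names come from the framework above, but the dictionary and the check are mine.

```python
# Hypothetical sketch of the band diagnostic, applied to the original prompt.
# Band names are from the sinc framework; the rest is illustrative.
prompt_bands = {
    "PERSONA": "",
    "CONTEXT": "",
    "DATA": "",
    "CONSTRAINTS": "",
    "FORMAT": "",
    "TASK": "Change the line where the timeout is set ...",
}

# A band counts as empty if it contains no non-whitespace content.
empty_bands = [name for name, text in prompt_bands.items() if not text.strip()]
print(f"{len(empty_bands)}/6 bands empty: {', '.join(empty_bands)}")
```

Run against the original prompt, the check reports five empty bands, with only TASK populated — the same verdict as the dissection above.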

  - 80,000 tokens spent
  - 3 tokens needed for the actual change
  - 5/6 bands empty or nearly so
  - 42.7% quality weight carried by CONSTRAINTS alone

Before and after: the same task, rewritten

Here is the prompt I should have written:

PERSONA: Senior Python engineer, this codebase, no surprises.

CONTEXT: Pipeline project. File: queue.py, function: _build_client(),
line 47. Current value: timeout=30. Env var: QUEUE_TIMEOUT (int, seconds).
Pattern used elsewhere: int(os.environ.get('VAR_NAME', default)).

CONSTRAINTS:
- Touch ONLY queue.py, ONLY line 47.
- Do NOT modify any other file.
- Do NOT add helper functions or abstractions.
- Do NOT change imports if the os module is already imported; add it
  only if missing.
- Do NOT touch any other timeout in this or any other file.
- If the os module is not imported, add ONLY: import os

FORMAT: Show the diff for line 47 only. One before line, one after line.

TASK: Replace the hardcoded integer 30 on line 47 of queue.py with
int(os.environ.get('QUEUE_TIMEOUT', 30)).

That prompt is longer in words. It is dramatically shorter in tokens spent, because the model executes it in a single pass — finds the file, finds the line, makes the change, stops. The output is a 2-line diff. Total spend: under 500 tokens. Savings: 99.4%.
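The replacement itself is the standard environment-variable-with-fallback pattern named in the CONTEXT band. A minimal, runnable sketch, using the QUEUE_TIMEOUT name and the default of 30 from the prompt above:

```python
import os

# Env var with a hardcoded fallback: reads QUEUE_TIMEOUT if set,
# otherwise keeps the old default of 30.
timeout = int(os.environ.get("QUEUE_TIMEOUT", 30))
```

Note that int() raises ValueError if the variable is set to a non-numeric string, which is usually what you want: a misconfigured environment should fail loudly rather than silently fall back.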

The key difference is the CONSTRAINTS band. Every line of it is a wall around something the model would otherwise be tempted to do. Without those walls, the model explores freely. With them, it moves in a perfectly bounded corridor from prompt to output.

The math of a missing CONSTRAINTS band

I measured this across 275 prompts after building the sinc framework. CONSTRAINTS carries 42.7% of output quality — the single largest band by far. The second largest is FORMAT at 26.3%. TASK, which most people assume is the most important part, carries only 2.8%.

This surprised me when I first calculated it. It should not have. CONSTRAINTS is where you specify everything that cannot happen. A model without constraints optimizes over a massive search space: everything that technically satisfies the task wording. A model with explicit constraints optimizes over the intersection of that space with the constraint set, which is almost always tiny by comparison.

The math is multiplicative, not additive. The sinc SNR formula is:

SNR = 0.588 + 0.267 · G(Z1) · H(Z2) · R(Z3) · G(Z4)
Each populated band multiplies signal quality. An empty CONSTRAINTS band drives the product toward zero regardless of other bands.

If CONSTRAINTS is empty, the product is zero. It does not matter how good your TASK description is. The Genie has no walls, no limits, no boundaries — and it will explore the entire space of things that technically satisfy your task wording. That exploration is what my 80,000 tokens paid for.
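Plugging numbers into the formula makes the multiplicative effect concrete. In this sketch each band factor is assumed to lie in [0, 1], with an empty band scoring 0; the post does not define G, H, and R, so which factor maps to which band is my assumption.

```python
def snr(g1: float, h2: float, r3: float, g4: float) -> float:
    # sinc SNR: a fixed baseline plus a multiplicative band term.
    # Each factor is assumed to be in [0, 1]; an empty band contributes 0.
    return 0.588 + 0.267 * g1 * h2 * r3 * g4

full = snr(1.0, 1.0, 1.0, 1.0)    # every band populated -> 0.855
gutted = snr(1.0, 1.0, 0.0, 1.0)  # one empty band zeroes the product -> 0.588
```

A single zero anywhere in the product collapses SNR to the 0.588 baseline no matter how strong the other factors are — the failure mode the 80,000-token session demonstrated.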

The lesson I cannot unlearn

I think about that incident every time I open a new prompt now. Before I type the task, I ask: what are the walls? What am I implicitly assuming the model will not do? What is the scope limit I have in my head but have not written down?

Those unstated assumptions — the ones that feel obvious because I know the codebase — are invisible to the model. The model does not know my codebase. It does not know which timeout I meant. It does not know that touching other files is out of scope. It does not know that I want a diff, not a full rewrite. It does not know any of this because I did not write it.

The lesson is not that I should have been more careful. The lesson is that careful prompting requires a framework, not just care. Without a framework, the CONSTRAINTS band stays empty by default because it is the hardest band to fill — it requires thinking about what must not happen, which is cognitively harder than thinking about what should happen.

The sinc format made CONSTRAINTS mandatory. When I fill out a structured prompt, the CONSTRAINTS section stares at me, waiting. I cannot skip it. And every time I fill it in, I am building the walls that keep the Genie inside the corridor I designed — not the vast open space it would otherwise explore.

Eighty thousand tokens taught me that lesson. I have not paid that price since.

Stop leaving the CONSTRAINTS band empty

AI Transform decomposes your raw prompt into all 6 sinc bands automatically — including CONSTRAINTS. Built on a fine-tuned Qwen2.5-7B model at 290 tok/s. Zero API cost.

Try AI Transform