Mario Alexandre  ·  March 26, 2026  ·  sinc-llm structured-prompts prompt-engineering

The 6 Bands Every Prompt Needs (And What Happens Without Them)

I've analyzed 275 prompt-response pairs and measured how much each part of a prompt actually drives quality. The results surprised me. The part most people spend the most effort on — the TASK description — matters the least. The part almost nobody writes — CONSTRAINTS — matters the most.

Here's the full picture, with exactly what each band does and what happens when it's missing.

sinc-LLM — the prompt as a 6-frequency-band signal:

x(t) = Σ_{n=0..5} x(nT) · sinc((t − nT) / T)

n=0 · PERSONA · 7.0%
Who the model should be. Sets expertise level, voice, assumptions about your technical depth. Without it: the model picks a default persona that may be far from what you need — too basic, too academic, too cautious.

n=1 · CONTEXT · 6.3%
What situation you're in. The stack, the codebase, the business problem. Without it: the model answers in the abstract, making assumptions about your environment that may be completely wrong.

n=2 · DATA · 3.8%
Relevant facts, numbers, examples. Without it: the model has to infer or make up specifics. Hallucination risk increases sharply when DATA is empty.

n=3 · CONSTRAINTS · 42.7%
What the model cannot do. Rules, limits, hard requirements. Without it: the model is free to make any tradeoff. It will often make the wrong ones, and you'll spend exchanges correcting them. This is where most back-and-forth comes from.

n=4 · FORMAT · 26.3%
How the output should look. Code diff, bullet list, prose explanation, JSON, step-by-step plan. Without it: the model picks a format based on what it thinks is most helpful. It is often wrong. "I wanted a diff, not a 500-word essay."

n=5 · TASK · 2.8%
The actual ask. What you want done. Most people write only this. Without the other 5 bands, TASK alone is severely underspecified — the model has everything it needs to be wrong in exactly the right direction.
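The band list above can be sketched as a simple prompt assembler. This is a hypothetical helper for illustration, not sinc-LLM's actual hook code; only the six band names come from the figure, and the sample band contents are invented.

```python
# Band names taken from the figure above; the assembler itself is illustrative.
BANDS = ["PERSONA", "CONTEXT", "DATA", "CONSTRAINTS", "FORMAT", "TASK"]

def assemble_prompt(bands: dict) -> str:
    """Concatenate the six bands in order, flagging any that are missing."""
    sections = []
    for name in BANDS:
        body = bands.get(name, "").strip()
        if not body:
            # An empty band is exactly where the model starts guessing.
            body = "(missing)"
        sections.append(f"## {name}\n{body}")
    return "\n\n".join(sections)

prompt = assemble_prompt({
    "PERSONA": "Senior backend engineer, terse, assumes I know Python.",
    "CONSTRAINTS": "Do not change the database schema. Keep the diff under 50 lines.",
    "FORMAT": "Unified diff only, no prose.",
    "TASK": "Add retry logic to the payment webhook handler.",
})
```

Keeping the band order fixed makes a gap visible: in the example, CONTEXT and DATA come out as "(missing)", which is a prompt you can spot and fix before sending.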

The Counterintuitive Finding

TASK carries 2.8% of quality weight. CONSTRAINTS carries 42.7%. If you had to choose between a perfect task description and a perfect constraints section, write the constraints.

This makes sense once you think about it. The model is good at figuring out what you want — that's what it was trained to do. What it cannot do well is figure out your hidden requirements, your unstated rules, your tradeoffs. "Don't change the database schema" is not implied by any task description. "Keep the response under 200 tokens" is not guessable. "Preserve the existing test coverage" is assumed by some models, ignored by others.

Every constraint you leave out is a potential wrong turn. And wrong turns generate clarification questions, corrections, and do-overs. Those are your wasted tokens.

What Happens When Bands Are Missing

My measurements track the correlation between missing bands and exchange rate (the number of back-and-forth responses per prompt). The data is clear:

Missing CONSTRAINTS: exchange rate increases by ~1.8 extra responses per prompt on average. That's the single biggest driver.
Missing FORMAT: +1.1 extra responses per prompt.
Missing CONTEXT: +0.6 extra responses per prompt.
Missing PERSONA: +0.3 extra responses per prompt.
Missing DATA or TASK: negligible exchange rate impact (model infers or asks directly and quickly).
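The per-band penalties above combine additively in this model, so the cost of a sloppy prompt is just a sum:

```python
# Additive model of extra round trips per prompt, using the
# per-band penalties measured above.
penalty = {
    "CONSTRAINTS": 1.8,
    "FORMAT": 1.1,
    "CONTEXT": 0.6,
    "PERSONA": 0.3,
    "DATA": 0.0,
    "TASK": 0.0,
}

def extra_round_trips(missing):
    """Expected extra responses per prompt for a set of missing bands."""
    return sum(penalty[b] for b in missing)

# The common worst case: CONSTRAINTS, FORMAT, and CONTEXT all absent.
worst_common_case = extra_round_trips(["CONSTRAINTS", "FORMAT", "CONTEXT"])
# ≈ 3.5 extra round trips per prompt
```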

That adds up. A prompt missing CONSTRAINTS + FORMAT + CONTEXT is burning ~3.5 extra round trips on average. At scale — 21,194 prompts per week — that's $1,588.56 in a week.
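As a sanity check, the weekly figure can be decomposed into an implied per-prompt and per-round-trip cost. These are derived from the numbers above, not additional measurements:

```python
# Backing out implied costs from the weekly figures stated above.
weekly_cost = 1588.56       # dollars per week
prompts_per_week = 21194
extra_trips = 3.5           # CONSTRAINTS + FORMAT + CONTEXT all missing

cost_per_prompt = weekly_cost / prompts_per_week        # ≈ $0.075 per prompt
cost_per_round_trip = cost_per_prompt / extra_trips     # ≈ $0.021 per round trip
```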

The auto-scatter hook fills all 6 bands automatically for every prompt. No extra effort from you. The hook costs $0.002 per call. It saves $0.08 per call in avoided exchanges. That's roughly a 40x return per call.
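The per-call economics can be checked directly from the two figures stated above:

```python
# ROI of the hook per call, using only the stated per-call figures.
hook_cost = 0.002   # dollars per call
savings = 0.08      # dollars per call in avoided exchanges

gross_roi = savings / hook_cost               # ≈ 40x
net_roi = (savings - hook_cost) / hook_cost   # ≈ 39x after paying for the hook
```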

Try sinc-LLM free — sincllm.com

The 6-band spec is open source. Leave a comment for the hook code.