Mario Alexandre  ·  March 26, 2026  ·  sinc-llm prompt-engineering structured-prompts

Constraints Carry 42% of Prompt Quality — Here's Why

I ran measurements across 275 prompt-response pairs, weighted quality by how much each part of the prompt drove good first-try responses, and got a result I didn't expect.

42.7%
of prompt quality weight comes from CONSTRAINTS alone

CONSTRAINTS — the band that tells the model what it cannot do — outweighs PERSONA, CONTEXT, DATA, and TASK combined. It carries more than 1.6× the weight of FORMAT (26.3%). And TASK, the band everyone obsesses over, carries just 2.8%.

Let me explain why this makes sense, and what to do about it.

The Model Is Good at Inferring Goals

Language models are trained on billions of examples of humans asking for things and getting responses. They're excellent at figuring out what you want. "Fix the bug" — the model knows you want the bug fixed. "Write a function that sorts a list" — the model knows you want a sorting function. The TASK, stated reasonably clearly, is something the model handles well.

What the model cannot do is infer your constraints. It doesn't know you've already tried the obvious solution. It doesn't know you can't modify the database. It doesn't know you need to stay under 100 lines. It doesn't know the tests must pass. It doesn't know you're targeting Python 3.9 specifically.

Every constraint you leave unstated is a dimension along which the model might go wrong — and won't know it went wrong until you tell it, at the cost of another exchange.

sinc-LLM — n=3 CONSTRAINTS is the highest-weight band
x(t) = Σ x(nT) · sinc((t - nT) / T)

Constraints Collapse the Solution Space

Here's the mental model I use. Before you specify any constraints, the model has a near-infinite solution space. Any valid answer to your task is on the table. The model picks one — and it may not be the one you wanted.

Each constraint collapses the solution space. "No DB schema changes" eliminates half the space. "Must pass existing tests" eliminates another chunk. "Python 3.9 compatible" eliminates another. "Response under 50 lines" eliminates another. By the time you've written 5-6 good constraints, the remaining solution space is small enough that the model almost certainly picks something you'll be happy with.

Without constraints, you're hoping the model happens to pick your preferred solution from a huge space. With constraints, you're guiding it to the small corner of the space where your preferred solution lives.
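The collapse can be sketched as a chain of filters. Here's a toy illustration — the candidate solutions and predicates are hypothetical, purely for intuition, not sinc-LLM internals:

```python
# Each constraint is a filter over candidate solutions; applying them
# in sequence shrinks the solution space. Candidates here are made-up
# summaries of possible fixes the model could produce.

candidates = [
    {"changes_schema": True,  "passes_tests": True,  "lines_added": 120},
    {"changes_schema": False, "passes_tests": False, "lines_added": 25},
    {"changes_schema": False, "passes_tests": True,  "lines_added": 80},
    {"changes_schema": False, "passes_tests": True,  "lines_added": 30},
]

constraints = [
    ("no DB schema changes",     lambda c: not c["changes_schema"]),
    ("existing tests must pass", lambda c: c["passes_tests"]),
    ("under 40 added lines",     lambda c: c["lines_added"] < 40),
]

space = candidates
for name, ok in constraints:
    space = [c for c in space if ok(c)]
    print(f"after '{name}': {len(space)} candidates left")
```

Running this prints the space collapsing from 4 candidates to 3, then 2, then 1 — the same narrowing a well-constrained prompt performs inside the model's head.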

What Good CONSTRAINTS Look Like

// CONSTRAINTS for a backend fix task
"CONSTRAINTS": "Must not change database schema or existing API contracts.
Existing test suite must pass without modification.
Fix must be backward compatible with Python 3.9.
No new external dependencies.
Minimal footprint — touch only files directly related to the bug.
If the fix requires architectural changes, flag this and propose
the minimal patch instead. Total added lines must be under 40."

That's 7 constraints. Each one eliminates a wrong direction. The model reading this has very little room to produce something unexpected — which is exactly what you want when you're fixing a production bug.

Compare to the typical CONSTRAINTS section in an unstructured prompt: nonexistent. The model gets "fix the webhook validation bug" and produces a solution that helpfully refactors 3 files, upgrades a dependency, and adds a new migration, all of which you have to undo before asking it to try again.

The Rule in My System

In the sinc-LLM specification I published, there's a hard rule: "n=3 CONSTRAINTS must be the longest band." Not longest by a little — explicitly the longest. If your CONSTRAINTS section is shorter than your TASK or CONTEXT section, you probably haven't thought through the constraints enough.
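That rule is easy to lint mechanically. A minimal sketch, assuming a prompt is a plain dict of named bands — the dict layout and band names here are my assumption for illustration, not the published sinc-LLM schema:

```python
# Check the "CONSTRAINTS must be the longest band" rule, using character
# count of each band's text as the length measure (an assumption).

def check_constraints_longest(prompt: dict) -> bool:
    """Return True if the CONSTRAINTS band is strictly the longest."""
    lengths = {band: len(text) for band, text in prompt.items()}
    constraints_len = lengths.get("CONSTRAINTS", 0)
    others = [n for band, n in lengths.items() if band != "CONSTRAINTS"]
    return bool(others) and constraints_len > max(others)

prompt = {
    "TASK": "Fix the webhook validation bug.",
    "CONTEXT": "Payments service, Flask app, bug appeared after v2.3.",
    "CONSTRAINTS": ("No DB schema changes. Existing tests must pass. "
                    "Python 3.9 compatible. No new dependencies. "
                    "Under 40 added lines."),
}
print(check_constraints_longest(prompt))  # True: CONSTRAINTS is longest
```

A check like this is a useful pre-flight gate: if it fails, you haven't written down enough of what you know.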

This rule forced me to think differently about what constraints are. They're not just "don't do X". They're your entire set of unstated requirements that the model has no way of knowing. Everything you'd catch in a code review. Everything you've learned from previous similar tasks. Everything specific to your codebase, your team's standards, your deployment environment.

Writing it all down upfront is more work than typing a short prompt. But it's far less work than an average of 4.2 clarification rounds.

Try sinc-LLM free — sincllm.com

The auto-scatter hook fills CONSTRAINTS automatically. Leave a comment for the code.