There's an old thought experiment about genie wishes. You make a wish, the genie grants it — exactly and literally. "I wish for a million dollars" and it appears on your doorstep, triggering an IRS investigation. "I wish for world peace" and humanity is instantly dead, which is technically peaceful. The genie gives you what you ask for, not what you meant.
Language models work the same way. They're extraordinarily literal. They give you exactly what your prompt asked for — and your prompt, in natural language, is full of ambiguity that gets resolved in ways you didn't intend.
Once I internalized this, my prompt quality changed immediately.
When I ask Claude "make the login faster", what does that actually mean? Faster how? Cache the session? Optimize the DB query? Remove validation steps? Return a 200 before the async work completes? All of these make login "faster". The genie picks one.
If I'm lucky, it picks the one I wanted. If I'm not, and by my measurements I'm not roughly 60% of the time, it picks a different one and I spend an exchange correcting it.
The fix is to make wishes that can only be interpreted one way. Not by being verbose, but by being specific about the dimensions that matter. Compare:

First version: "Make the login faster."

Second version: "Reduce P95 latency of the login endpoint by optimizing the database query. Don't change the validation steps, don't change the schema, and don't add a cache."

The second version isn't longer because I'm being verbose. It's longer because I've specified the metric (P95), the mechanism (query optimization), and three explicit constraints: no validation changes, no schema changes, no cache. The genie now has almost no room to go wrong.
In genie stories, the hero eventually learns to add safeguards. "I wish for a million dollars, obtained legally, without harm to anyone, without triggering any legal investigation, deposited in my existing bank account." Each safeguard closes one hole the genie could exploit.
In sinc-LLM, that's the CONSTRAINTS band. It carries 42.7% of prompt quality weight in my measurements — more than PERSONA, CONTEXT, DATA, and TASK combined. The reason is exactly the genie analogy: constraints are how you close the holes in your specification.
Every constraint closes one direction the model might go wrong. "No schema changes" closes the migration direction. "Tests must pass" closes the refactor-everything direction. "Response under 50 lines" closes the verbose-explanation direction. The more constraints you write, the smaller the space of valid responses, and the more likely the model picks the one you actually want.
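To make the shrinking-space idea concrete, here's a toy sketch. The candidates and predicates are mine, purely illustrative, not anything from the sinc spec: treat each constraint as a filter over the set of responses the model could plausibly give.

```python
# Toy illustration: each constraint is a predicate that removes candidate
# responses, shrinking the space the model can pick from.
candidates = [
    {"name": "optimize query",   "schema_change": False, "lines": 30},
    {"name": "add migration",    "schema_change": True,  "lines": 45},
    {"name": "long explanation", "schema_change": False, "lines": 120},
]

constraints = [
    lambda c: not c["schema_change"],  # "No schema changes"
    lambda c: c["lines"] <= 50,        # "Response under 50 lines"
]

# Only responses that pass every constraint remain valid.
valid = [c for c in candidates if all(check(c) for check in constraints)]
print([c["name"] for c in valid])  # → ['optimize query']
```

With zero constraints, all three candidates are "correct" answers to the prompt; with two, only the one I actually wanted survives.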
Another classic genie failure: you wish for gold and get a gold statue of yourself. You got gold, just not in the shape you wanted. FORMAT is how you specify the shape of the output.
"Code diff" means a before/after diff, not an explanation of what to change. "Bullet list" means bullets, not flowing prose. "JSON schema" means a schema, not an example of JSON that matches it. If you don't specify FORMAT, the genie picks one — usually prose, usually too long, usually not what you actually needed for your next step.
FORMAT carries 26.3% of quality weight in my measurements. The second biggest driver after CONSTRAINTS. Together they're 69% of what makes a prompt response good or bad.
Writing perfect genie wishes every time is hard. In practice, when I'm in flow, I type "fix the login bug" and hit enter. I'm not thinking about CONSTRAINTS and FORMAT specifications — I'm thinking about the problem.
The auto-scatter hook takes my imperfect wish and fills in the safeguards automatically. It infers the most likely constraints from context, fills in the most likely format from the type of task, and adds persona and context from the project environment. It's like having an interpreter between me and the genie who cleans up my wishes before they're granted.
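As a rough sketch of how such a hook might work — and to be clear, this is an assumed illustration, not the actual auto-scatter implementation; every name, mapping, and heuristic here is hypothetical — it expands a terse prompt into explicit bands before the model sees it:

```python
# Hypothetical auto-scatter sketch. The band names follow the sinc spec;
# the inference heuristics and data structures are invented for illustration.

TASK_FORMATS = {
    "fix": "code diff",
    "refactor": "code diff",
    "explain": "bullet list",
}

PROJECT_CONSTRAINTS = [
    "Existing tests must pass.",
    "No schema changes.",
]

def auto_scatter(raw_prompt: str, project: dict) -> str:
    """Expand a terse prompt into explicit PERSONA/CONTEXT/TASK/CONSTRAINTS/FORMAT bands."""
    verb = raw_prompt.split()[0].lower()
    fmt = TASK_FORMATS.get(verb, "prose")  # infer FORMAT from the task verb
    bands = [
        f"PERSONA: {project.get('persona', 'senior engineer on this codebase')}",
        f"CONTEXT: {project.get('context', '')}",
        f"TASK: {raw_prompt}",
        "CONSTRAINTS: " + " ".join(PROJECT_CONSTRAINTS),
        f"FORMAT: {fmt}",
    ]
    # Drop bands we couldn't fill from context.
    return "\n".join(b for b in bands if not b.endswith(": "))

expanded = auto_scatter("fix the login bug", {"context": "Flask app, Postgres"})
print(expanded)
```

The point of the sketch is the shape of the mechanism: the raw wish survives unchanged in the TASK band, and the safeguards get filled in around it from whatever the project environment already knows.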
The result: my exchange rate dropped from 4.2 to 1.6 responses per prompt. The hook costs about $0.002 per call and saves about $0.08 per call, roughly a 40x return. It's open source. Leave a comment and I'll share the link.
The sinc spec teaches you to make better wishes. Try sinc-LLM free: sincllm.com