AI Transform: Let the Model Decompose Your Prompt, Not a Template

March 25, 2025 · 6 min read · ai-transform prompt-engineering sinc-llm local-llm

Contents

  1. Two buttons, two approaches
  2. What templates can't do
  3. Before/after examples
  4. The six sinc bands
  5. Why CONSTRAINTS is always longest
  6. When to use each
x(t) = Σ x(nT) · sinc((t − nT) / T)
Nyquist-Shannon sampling applied to prompt engineering. AI Transform samples your raw intent at 6 frequency bands, reconstructing it as a complete specification.
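The interpolation formula above is runnable as-is. A minimal stdlib-Python sketch (the 1 Hz test signal and 10 Hz sample rate are illustrative, not anything from sinc-LLM itself):

```python
import math

def sinc(x: float) -> float:
    # Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples: list[float], T: float, t: float) -> float:
    """Whittaker-Shannon reconstruction: x(t) = sum over n of x(nT) * sinc((t - nT) / T)."""
    return sum(s * sinc((t - n * T) / T) for n, s in enumerate(samples))

# Band-limited test signal: a 1 Hz sine sampled at 10 Hz, well above the Nyquist rate
T = 0.1
samples = [math.sin(2 * math.pi * n * T) for n in range(200)]
x_hat = reconstruct(samples, T, 5.03)  # reconstruct at an off-grid instant
```

Because the signal is sampled above its Nyquist rate, the off-grid reconstruction lands within truncation error of the true value.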

Two buttons, two approaches

sincllm.com now has two Transform buttons. "Transform" runs the original client-side template engine — fast, offline-capable, zero latency. "AI Transform" sends your raw prompt to a fine-tuned local model that reads your actual intent and produces sinc JSON tailored to what you're doing.

They produce the same output format. The difference is in how that output is generated — and how good it is. The first time I put the AI output next to the template output, the gap was obvious.

Transform (template)

Runs in the browser. Parses keywords, fills generic band templates. Fast but pattern-matched — the output describes the format, not your intent.

AI Transform (model)

Calls a fine-tuned Qwen2.5-7B model running locally. Reads your full prompt, generates task-specific content in each band. 1-2 seconds, zero marginal cost.

What templates can't do

A template engine works by pattern-matching the input and slotting tokens into predefined positions. "fix the bug" might match a "debugging" template that fills PERSONA with "developer", TASK with "fix the identified bug", and CONSTRAINTS with a generic list of debugging constraints.
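That matching-and-slotting step can be sketched in a few lines. This is a hypothetical toy, not the sincllm.com source; the keyword table and fallback values are invented to show the mechanism:

```python
# Hypothetical keyword-matched template engine: match a keyword, emit canned bands.
TEMPLATES = {
    "bug": {
        "PERSONA": "Developer",
        "TASK": "Fix the identified bug.",
        "CONSTRAINTS": "Fix the issue correctly. Test thoroughly. Follow best practices.",
    },
    "blog": {
        "PERSONA": "Content writer with expertise in the relevant domain.",
        "TASK": "Write the requested blog post.",
        "CONSTRAINTS": "Be accurate. Match the requested tone. Keep it readable.",
    },
}

def template_transform(prompt: str) -> dict:
    """Return the canned bands for the first keyword found in the prompt."""
    lowered = prompt.lower()
    for keyword, bands in TEMPLATES.items():
        if keyword in lowered:
            return dict(bands)
    return {"PERSONA": "Assistant", "TASK": prompt, "CONSTRAINTS": "Follow best practices."}

# Both prompts collapse to identical output -- the failure mode discussed below:
a = template_transform("fix the auth bug before the demo")
b = template_transform("fix the rendering bug on mobile Safari")
```

Two very different prompts hit the same keyword and get byte-identical bands, which is exactly where a template engine stops being useful.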

That's useful as a starting point. But it can't do three things that matter:

It can't read specificity. "Fix the auth bug before the demo" and "Fix the rendering bug on mobile Safari" both match "debugging". But the CONTEXT, CONSTRAINTS, and TASK bands for these are completely different. A template produces the same output for both.

It can't infer domain. "Optimize the query" could be SQL, MongoDB, GraphQL, or a search engine query. The model picks up on surrounding text and word choice to infer which domain applies and fills the PERSONA and CONTEXT bands appropriately.

It can't weight the CONSTRAINTS band correctly. CONSTRAINTS carries 42.7% of prompt signal quality in the sinc-LLM framework. I realized this is where template engines completely break down — they fill it with boilerplate. The model generates specific, actionable constraints derived from your actual task.

Before/after examples

Example 1: "fix the login bug"

Template output (CONSTRAINTS band):

"x": "Fix the issue correctly. Test thoroughly. Follow best practices.
Do not break existing functionality. Document changes."

AI Transform output (CONSTRAINTS band):

"x": "The login bug must be diagnosed before any fix is written —
read the error trace, identify the exact failure point (auth token
validation, session expiry, cookie handling, or redirect logic).
Fix only the broken component, not the login flow as a whole.
Existing authenticated sessions must not be invalidated by the fix.
The fix must be testable with a single test user without affecting
production sessions. No config changes that require a server restart
unless unavoidable. Regression test: login, logout, and re-login
must all succeed after the fix is applied."

The AI output is 6x longer, task-specific, and operationally grounded. It captures constraints that a developer actually cares about — session preservation, production safety, specific test criteria.

Example 2: "write a blog post about my new feature"

Template output (PERSONA band):

"x": "Content writer with expertise in the relevant domain."

AI Transform output (PERSONA band):

"x": "Technical product writer who understands both the engineering
implementation and the end-user value proposition. Writes in a
direct, specific voice — no fluff, no hedging. Uses exact metrics
and real examples. Treats readers as intelligent adults who want
information, not marketing copy."

The template produces a role. The model produces a writing philosophy and voice specification that actually guides the output.

The six sinc bands

Band | Function | Quality weight
PERSONA (n=0) | Who is executing this. Role, expertise level, voice, approach. | ~8%
CONTEXT (n=1) | Situational framing. What's happening, why, what system is involved. | ~15%
DATA (n=2) | Inputs, facts, references. What the executor is working with. | ~12%
CONSTRAINTS (n=3) | Rules, limits, invariants. What cannot be violated. Always longest. | 42.7%
FORMAT (n=4) | Output shape. Length, structure, style, delivery format. | ~11%
TASK (n=5) | The atomic action. One sentence. What exactly is to be done. | ~12%
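The table above pins down the structure both buttons emit. A sketch of that shape in Python — only the band names, the n indices, and the "x" content key appear in this article; the wrapper object is an assumption:

```python
# Hypothetical shape of the sinc JSON; band names and n indices are from the table above,
# the "bands" wrapper key is assumed for illustration.
BANDS = ["PERSONA", "CONTEXT", "DATA", "CONSTRAINTS", "FORMAT", "TASK"]
WEIGHTS = [0.08, 0.15, 0.12, 0.427, 0.11, 0.12]  # approximate quality weights from the table

def make_sinc_json(contents: list[str]) -> dict:
    """Assemble six band contents into the ordered band structure."""
    assert len(contents) == len(BANDS)
    return {"bands": [{"n": n, "name": name, "x": x}
                      for n, (name, x) in enumerate(zip(BANDS, contents))]}

doc = make_sinc_json(["Developer", "Auth service outage", "Error trace",
                      "Do not invalidate existing sessions.", "Diff plus notes",
                      "Fix the login bug."])
```

The weights sum to roughly 1.0 (they are approximate except for the 42.7% CONSTRAINTS figure), and CONSTRAINTS always sits at n=3.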

Why CONSTRAINTS is always longest

This is one of the hard invariants in the sinc-LLM framework, and the model has learned to enforce it. CONSTRAINTS carries 42.7% of prompt signal quality because it defines the solution space boundary. A well-specified task with weak constraints produces wildly variable output. A loosely worded task with tight constraints produces something usable. The implication hit me: most prompt engineers spend the bulk of their effort on the wrong bands.
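The trade-off is easy to see with a toy scoring model. Assuming a simple linear combination of per-band scores under the weights from the table (the linearity and the example scores are my illustration, not part of the framework):

```python
# Illustrative weighted quality score; the linear model and scores are assumptions.
WEIGHTS = {"PERSONA": 0.08, "CONTEXT": 0.15, "DATA": 0.12,
           "CONSTRAINTS": 0.427, "FORMAT": 0.11, "TASK": 0.12}

def quality(band_scores: dict) -> float:
    """Weighted sum of per-band scores, each in [0, 1]."""
    return sum(WEIGHTS[b] * band_scores.get(b, 0.0) for b in WEIGHTS)

# Polished everything except CONSTRAINTS vs. loose wording with tight constraints:
weak = {b: 0.9 for b in WEIGHTS} | {"CONSTRAINTS": 0.2}
tight = {b: 0.5 for b in WEIGHTS} | {"CONSTRAINTS": 0.9}
```

Under these numbers the "tight constraints, loose everything else" prompt scores higher (about 0.67 vs 0.61), which is the point: effort spent on CONSTRAINTS dominates.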

The intuition from Nyquist-Shannon: CONSTRAINTS is the sampling rate. If your sampling rate is too low, you get aliasing — the output folds into itself and produces artifacts (hallucinations, off-scope responses, wrong format). High-frequency CONSTRAINTS text prevents aliasing.

When the model decomposes "hi" (a 2-character input), it still generates a multi-sentence CONSTRAINTS band — because even a greeting has implicit constraints: respond in kind, don't overwhelm, match the casual register. The band length invariant holds even at minimum input length. I tested this specifically and was surprised it worked.

When to use each

Use the standard Transform when:

  - you're offline, or zero latency matters more than depth: it runs entirely in the browser
  - you just want a quick structural starting point to edit by hand

Use AI Transform when:

  - the prompt carries real specificity (domain, platform, deadline) that a template would flatten
  - you want CONSTRAINTS derived from your actual task instead of boilerplate, and a 1-2 second round trip is acceptable

The two buttons coexist permanently. The template engine is the offline fallback. The model is the primary path. Both output the same JSON structure — the downstream consumer doesn't need to know which one ran. I designed it this way so there's never a dead end for the user.
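A minimal sketch of how that fallback might be wired; the function names, the simulated failure, and the "engine" field are all hypothetical:

```python
# Sketch of the primary-path / offline-fallback wiring; both bodies are stand-ins.
def run_model(prompt: str) -> dict:
    raise ConnectionError("local model unavailable")  # simulate the model being down

def run_template(prompt: str) -> dict:
    return {"bands": [], "engine": "template"}  # placeholder for the client-side engine

def transform(prompt: str) -> dict:
    """The model is the primary path; the template engine is the offline fallback."""
    try:
        return run_model(prompt)
    except Exception:
        return run_template(prompt)

result = transform("fix the login bug")
```

Because both paths return the same JSON structure, the caller never inspects which engine ran — the fallback is invisible downstream.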

Try AI Transform on your prompt

Paste any raw prompt — 1 word or 1 paragraph — and see the model decompose it into all 6 sinc bands.

Try AI Transform →