Mario Alexandre  ·  March 26, 2026  ·  sinc-llm structured-prompts nyquist

The Formula I Put on Every Prompt (And Why It Matters)

Every prompt I build with sinc-LLM starts with this:

x(t) = Σ x(nT) · sinc((t - nT) / T)

People ask me: is that decoration? Is it just branding? Is it something the model actually uses?

It's the mathematical contract. And yes, the model uses it — not by solving the equation, but by recognizing the pattern it belongs to and calibrating its interpretation of the rest of the prompt accordingly.

What the Formula Actually Means

This is the Whittaker-Shannon interpolation formula, the reconstruction formula at the heart of the Nyquist-Shannon sampling theorem. In signal processing, it says: if you sample a continuous, bandlimited signal at discrete points (the x(nT) terms), you can perfectly reconstruct the original signal using sinc interpolation, as long as your sampling rate exceeds twice the highest frequency in the signal (the Nyquist rate).
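
Nothing sinc-LLM-specific here, but it helps to see the formula work on an actual signal. This is the textbook reconstruction in pure Python, truncated to a finite window of samples (the infinite sum isn't computable, so the result is approximate and improves as N grows):

```python
import math

def sinc(x: float) -> float:
    """Normalized sinc: sin(pi x) / (pi x), with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, T: float, n0: int, t: float) -> float:
    """Whittaker-Shannon reconstruction: x(t) = sum over n of x(nT) * sinc((t - nT)/T),
    where samples[k] holds x((n0 + k) * T)."""
    return sum(x_n * sinc((t - (n0 + k) * T) / T) for k, x_n in enumerate(samples))

# A 1 Hz sine sampled at 10 Hz -- comfortably above its 2 Hz Nyquist rate.
f, T = 1.0, 0.1
N = 2000                                  # samples on each side of t = 0
samples = [math.sin(2 * math.pi * f * n * T) for n in range(-N, N + 1)]

t = 0.234                                 # an instant between sample points
approx = reconstruct(samples, T, -N, t)
exact = math.sin(2 * math.pi * f * t)
print(abs(approx - exact))                # small; shrinks as N grows
```

The reconstructed value at an off-grid instant matches the original signal, which is the whole point: the discrete samples carry the full continuous signal.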

The mapping to prompts is this:

x(t): the full intent of your prompt, what you actually want, including everything implied and unstated
x(nT): the discrete samples, the 6 sinc bands (PERSONA, CONTEXT, DATA, CONSTRAINTS, FORMAT, TASK)
T: the specification axis, the spacing between information types, the resolution of your specification
sinc(): the interpolation, how the model reconstructs the full intent from the discrete samples
Σ: the sum, all 6 bands together reconstructing the complete specification

The theorem says you can reconstruct a signal perfectly from samples — if you sample at the right frequency. The 6 bands are chosen to sample the "prompt signal" at a frequency that captures everything a language model needs to respond correctly on the first try.

Why It's in the JSON, Not Just in My Head

The formula lives in the JSON because the sinc format is designed to be machine-readable. When one agent passes a prompt to another, or when the scatter hook injects context, or when a sub-agent receives a delegation — the formula in the JSON is a versioned identifier. It says: "this structure follows the sinc-LLM specification."

{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {"n": 0, "t": "PERSONA", "x": "..."},
    {"n": 1, "t": "CONTEXT", "x": "..."},
    {"n": 2, "t": "DATA", "x": "..."},
    {"n": 3, "t": "CONSTRAINTS", "x": "..."},
    {"n": 4, "t": "FORMAT", "x": "..."},
    {"n": 5, "t": "TASK", "x": "..."}
  ]
}
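
The spec's full validation rules aren't reproduced in this post, but a minimal consumer-side check of the structure above might look like the following. (is_valid_sinc_prompt and EXPECTED_BANDS are illustrative names of my own, not part of the spec.)

```python
import json

EXPECTED_BANDS = ["PERSONA", "CONTEXT", "DATA", "CONSTRAINTS", "FORMAT", "TASK"]

def is_valid_sinc_prompt(raw: str) -> bool:
    """Check that a JSON payload carries the sinc-LLM shape shown above:
    a top-level formula field plus all 6 bands, in order, indexed n = 0..5."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(obj, dict) or "formula" not in obj:
        return False
    fragments = obj.get("fragments", [])
    return [(f.get("n"), f.get("t")) for f in fragments] == list(enumerate(EXPECTED_BANDS))

payload = json.dumps({
    "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
    "T": "specification-axis",
    "fragments": [{"n": i, "t": band, "x": "..."} for i, band in enumerate(EXPECTED_BANDS)],
})
print(is_valid_sinc_prompt(payload))   # True
```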

The formula field is also how the pass-through rule works in the auto-scatter hook. If an incoming prompt contains "formula" as a top-level JSON key, the hook recognizes it as already-sinc and skips re-scattering. The formula is both documentation and detection signal.
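
The hook's actual implementation isn't shown in this post, so here's a minimal sketch of just the pass-through rule, with scatter() as a hypothetical stub standing in for the real re-scattering step:

```python
import json

def scatter(prompt: str) -> str:
    """Hypothetical stub for the auto-scatter step (not the real implementation)."""
    return f"<scattered>{prompt}</scattered>"

def auto_scatter_hook(prompt: str) -> str:
    """Pass-through rule: if the incoming prompt is JSON with a top-level
    "formula" key, treat it as already sinc-structured and skip re-scattering."""
    try:
        obj = json.loads(prompt)
    except json.JSONDecodeError:
        obj = None
    if isinstance(obj, dict) and "formula" in obj:
        return prompt              # already sinc: pass through unchanged
    return scatter(prompt)

already_sinc = '{"formula": "x(t) = ...", "fragments": []}'
print(auto_scatter_hook(already_sinc) == already_sinc)   # True: passed through
print(auto_scatter_hook("summarize this file"))          # gets scattered
```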

The Nyquist Connection Is Real

I spent time verifying that the Nyquist analogy holds up before I published the paper (DOI: 10.5281/zenodo.19152668). Here's the key parallel that made it click for me:

In Nyquist sampling theory, aliasing happens when you sample below the Nyquist rate. Aliasing = signal corruption = wrong reconstruction. In prompting, "aliasing" happens when the model doesn't have enough dimensional information — it fills in the gaps with wrong assumptions. The result is prompt aliasing: the model reconstructs your intent incorrectly because it had too few samples of the full specification space.
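
The signal-processing half of this parallel is easy to demonstrate numerically. A pure-Python sketch, using a truncated version of the reconstruction sum: sample a 7 Hz sine at only 10 Hz (below its 14 Hz Nyquist rate), and the reconstruction lands on the alias, not the original.

```python
import math

def sinc(x: float) -> float:
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, T, n0, t):
    # Truncated Whittaker-Shannon sum over the available samples.
    return sum(s * sinc((t - (n0 + k) * T) / T) for k, s in enumerate(samples))

# A 7 Hz sine sampled at 10 Hz -- below its 14 Hz Nyquist rate.
f, T, N = 7.0, 0.1, 2000
samples = [math.sin(2 * math.pi * f * n * T) for n in range(-N, N + 1)]

t = 0.234
recon = reconstruct(samples, T, -N, t)
true_val = math.sin(2 * math.pi * 7.0 * t)
alias_val = math.sin(2 * math.pi * -3.0 * t)   # 7 Hz folds to -3 Hz at fs = 10 Hz
print(abs(recon - alias_val))   # small: reconstruction matches the alias
print(abs(recon - true_val))    # large: the original 7 Hz signal is gone
```

The samples are identical to those of a -3 Hz sine, so the reconstruction confidently produces the wrong signal. That's the failure mode the prompt analogy points at: too few samples, and the gaps get filled with something plausible but wrong.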

The 6 bands are a sampling rate designed to prevent prompt aliasing. CONSTRAINTS and FORMAT together carry 69% of the quality weight because those are the dimensions most commonly undersampled in natural language prompts. The auto-scatter hook samples all 6 bands on every prompt — even short ones, even ambiguous ones — because aliasing happens at any prompt length when bands are missing.

Try sinc-LLM free — sincllm.com

Full specification and DOI paper linked from the site.