Prompts are code. They should be version-controlled, validated, diffable, and machine-readable. The .sinc.json format provides a standardized JSON schema for LLM prompts based on signal processing theory.
Prompts stored as plain text strings cannot be versioned meaningfully. A git diff between two versions of a 500-word prompt written as a single paragraph shows the entire paragraph as changed, even if you only modified one constraint. And there is no way to run automated validation: is the prompt missing a persona? Are the constraints specific enough? Is the format specified?
The .sinc.json format treats prompts as structured data with a defined schema. Each specification dimension is a separate field, enabling targeted diffs, automated validation, band-level testing, and machine-to-machine prompt passing in multi-agent systems.
This format was developed by sinc-LLM based on the Nyquist-Shannon sampling theorem applied to natural language specification:
```json
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {"n": 0, "t": "PERSONA", "x": "string — who the LLM should be"},
    {"n": 1, "t": "CONTEXT", "x": "string — background information"},
    {"n": 2, "t": "DATA", "x": "string — specific inputs and examples"},
    {"n": 3, "t": "CONSTRAINTS", "x": "string — rules and limitations (longest band)"},
    {"n": 4, "t": "FORMAT", "x": "string — expected output structure"},
    {"n": 5, "t": "TASK", "x": "string — the action to perform"}
  ]
}
```
| Field | Type | Required | Description |
|---|---|---|---|
| `formula` | string | Yes | The sinc reconstruction formula. Always `x(t) = Σ x(nT) · sinc((t - nT) / T)` |
| `T` | string | Yes | The sampling axis. Always `specification-axis` |
| `fragments` | array | Yes | Array of exactly 6 fragment objects |
| `fragments[].n` | integer | Yes | Band index: 0-5 |
| `fragments[].t` | string | Yes | Band type: `PERSONA`, `CONTEXT`, `DATA`, `CONSTRAINTS`, `FORMAT`, or `TASK` |
| `fragments[].x` | string | Yes | Band content — the specification text for this dimension |
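A concrete instance of the schema, with band contents shortened for illustration (the product and text here are hypothetical examples, not sinc-LLM output):

```json
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {"n": 0, "t": "PERSONA", "x": "You are a senior customer-support agent for a SaaS billing product."},
    {"n": 1, "t": "CONTEXT", "x": "Customers contact us about invoices, refunds, and plan changes."},
    {"n": 2, "t": "DATA", "x": "The ticket text, the customer's plan tier, and their last three invoices are provided."},
    {"n": 3, "t": "CONSTRAINTS", "x": "Never promise a refund outright; quote the refund policy instead. Keep replies under 150 words. Escalate legal threats to a human agent. Do not reveal internal tooling."},
    {"n": 4, "t": "FORMAT", "x": "A short empathetic paragraph followed by numbered next steps."},
    {"n": 5, "t": "TASK", "x": "Draft a reply to the customer ticket."}
  ]
}
```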
Version control: When you change a constraint, the git diff shows only the CONSTRAINTS band changed. Not the entire prompt. This makes prompt evolution traceable and reviewable.
Validation: A JSON schema validator can check that all 6 bands are present, that the CONSTRAINTS band exceeds a minimum length threshold, and that no band is empty. You can add this to CI/CD pipelines with a pre-commit hook.
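One way to express most of these checks is a standard JSON Schema document. This is a sketch against the field table above, not an official schema published by sinc-LLM; note that the CONSTRAINTS length threshold cannot be expressed with a plain `minLength` on `x` and needs an `if`/`then` clause or custom validator code:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["formula", "T", "fragments"],
  "properties": {
    "formula": {"const": "x(t) = Σ x(nT) · sinc((t - nT) / T)"},
    "T": {"const": "specification-axis"},
    "fragments": {
      "type": "array",
      "minItems": 6,
      "maxItems": 6,
      "items": {
        "type": "object",
        "required": ["n", "t", "x"],
        "properties": {
          "n": {"type": "integer", "minimum": 0, "maximum": 5},
          "t": {"enum": ["PERSONA", "CONTEXT", "DATA", "CONSTRAINTS", "FORMAT", "TASK"]},
          "x": {"type": "string", "minLength": 1}
        }
      }
    }
  }
}
```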
Machine readability: In multi-agent systems, agents need to pass specifications to each other. The sinc JSON format is the inter-agent cognitive contract — each agent reads the bands it needs without parsing natural language.
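Reading a single band is a dictionary lookup rather than text parsing. A minimal accessor (the helper name `get_band` is illustrative, not part of the spec):

```python
def get_band(sinc: dict, band: str) -> str:
    """Return the content of one named band from a parsed .sinc.json dict."""
    for frag in sinc["fragments"]:
        if frag["t"] == band:
            return frag["x"]
    raise KeyError(f"Band not found: {band}")

# An agent that only needs the task reads exactly one field.
# (Fragment list truncated to two bands for brevity.)
spec = {
    "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
    "T": "specification-axis",
    "fragments": [
        {"n": 3, "t": "CONSTRAINTS", "x": "Max 100 words."},
        {"n": 5, "t": "TASK", "x": "Summarize the ticket."},
    ],
}
task = get_band(spec, "TASK")
```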
Diffability: Compare two prompt versions side by side at the band level. See exactly which specification dimension changed between versions.
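A band-level diff takes a few lines of Python. This is a sketch, and `diff_bands` is a hypothetical helper rather than part of any sinc-LLM tooling:

```python
def diff_bands(old: dict, new: dict) -> dict:
    """Return {band: (old_text, new_text)} for every band whose content changed."""
    old_by_t = {f["t"]: f["x"] for f in old["fragments"]}
    new_by_t = {f["t"]: f["x"] for f in new["fragments"]}
    return {
        t: (old_by_t.get(t, ""), new_by_t.get(t, ""))
        for t in old_by_t.keys() | new_by_t.keys()
        if old_by_t.get(t) != new_by_t.get(t)
    }

# Only the changed band shows up (fragment lists truncated for brevity):
old = {"fragments": [{"n": 3, "t": "CONSTRAINTS", "x": "Max 200 words."},
                     {"n": 5, "t": "TASK", "x": "Summarize."}]}
new = {"fragments": [{"n": 3, "t": "CONSTRAINTS", "x": "Max 100 words."},
                     {"n": 5, "t": "TASK", "x": "Summarize."}]}
changed = diff_bands(old, new)
```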
Composability: Merge bands from different prompts. Use the PERSONA from one prompt, the CONSTRAINTS from another, and the FORMAT from a third. This is trivial with JSON fields and impossible with text paragraphs.
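Band-level composition can be sketched as a dictionary mapping each band name to the source prompt it should come from (`compose` and `_sample` are hypothetical helpers for illustration):

```python
BANDS = ["PERSONA", "CONTEXT", "DATA", "CONSTRAINTS", "FORMAT", "TASK"]

def compose(band_sources: dict) -> dict:
    """Build a new sinc prompt, taking each band from its chosen source prompt."""
    fragments = [
        {
            "n": n,
            "t": t,
            "x": next(f["x"] for f in band_sources[t]["fragments"] if f["t"] == t),
        }
        for n, t in enumerate(BANDS)
    ]
    return {
        "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
        "T": "specification-axis",
        "fragments": fragments,
    }

def _sample(prefix: str) -> dict:
    """Fabricate a full 6-band prompt for the demo."""
    return {
        "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
        "T": "specification-axis",
        "fragments": [{"n": n, "t": t, "x": f"{prefix} {t}"} for n, t in enumerate(BANDS)],
    }

support = _sample("support")
reviewer = _sample("reviewer")

# Every band from the support prompt, except CONSTRAINTS from the reviewer prompt:
merged = compose({**{t: support for t in BANDS}, "CONSTRAINTS": reviewer})
```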
sinc-LLM recommends the .sinc.json file extension for structured prompts. This makes prompts discoverable and enables editor plugins, linters, and CI rules to target prompt files specifically.
```
# Project structure
prompts/
  customer-support.sinc.json
  code-review.sinc.json
  content-writer.sinc.json
  data-analysis.sinc.json
```
Each file is a valid JSON document containing the full sinc schema. Store them alongside your application code and version them with git.
```yaml
# .github/workflows/validate-prompts.yml
name: Validate sinc prompts
on: [push, pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate sinc JSON
        run: |
          # Pass each filename as an argument instead of interpolating it
          # into the Python source, so odd filenames cannot break the script.
          for f in prompts/*.sinc.json; do
            python3 -c "
          import json, sys
          path = sys.argv[1]
          with open(path) as fh:
              d = json.load(fh)
          assert d.get('formula'), 'Missing formula'
          assert d.get('T') == 'specification-axis', 'Invalid T'
          assert len(d.get('fragments', [])) == 6, 'Need exactly 6 bands'
          bands = {frag['t'] for frag in d['fragments']}
          required = {'PERSONA', 'CONTEXT', 'DATA', 'CONSTRAINTS', 'FORMAT', 'TASK'}
          assert bands == required, f'Missing bands: {required - bands}'
          constraints = next(frag for frag in d['fragments'] if frag['t'] == 'CONSTRAINTS')
          assert len(constraints['x']) >= 100, 'CONSTRAINTS too short'
          print('PASS: ' + path)
          " "$f"
          done
```
This CI pipeline validates that every .sinc.json file in your repository has all 6 bands, a non-empty formula field, and a CONSTRAINTS band of sufficient length. Failed validation blocks the PR merge.
```python
# Python: Load and flatten sinc JSON for any LLM API
import json

def load_sinc(path: str) -> str:
    """Load a .sinc.json file and flatten it to a system prompt string."""
    with open(path) as f:
        sinc = json.load(f)
    # Emit bands in n-order, each labeled with its band type.
    return "\n\n".join(
        f"[{frag['t']}]\n{frag['x']}"
        for frag in sorted(sinc["fragments"], key=lambda frag: frag["n"])
    )

# Use with the OpenAI API
from openai import OpenAI

client = OpenAI()
system_prompt = load_sinc("prompts/customer-support.sinc.json")
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},  # the incoming user turn
    ],
)
```
The sinc-LLM website generates .sinc.json output that you can copy directly into your project. See the full specification for detailed schema documentation.
As prompt engineering becomes a professional discipline, teams need a standard format for sharing, reviewing, and versioning prompts. The .sinc.json format provides this standard — it is open, documented, and backed by the signal-theoretic foundation that ensures every prompt captures the full bandwidth of human intent.
The sinc-LLM JSON prompt template generator creates valid .sinc.json files from any raw text input, making it easy to adopt the format without learning the schema manually.
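The generator's exact heuristics are not documented here, but a minimal skeleton builder is easy to sketch. `text_to_sinc` below is a hypothetical helper, not the actual sinc-LLM generator: it places the raw text in the TASK band and marks the other five bands as placeholders to fill in:

```python
BANDS = ["PERSONA", "CONTEXT", "DATA", "CONSTRAINTS", "FORMAT", "TASK"]

def text_to_sinc(raw: str) -> dict:
    """Wrap raw prompt text into a .sinc.json skeleton: the text becomes the
    TASK band; the other bands get TODO placeholders for later editing."""
    return {
        "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
        "T": "specification-axis",
        "fragments": [
            {"n": n, "t": t, "x": raw.strip() if t == "TASK" else f"TODO: {t}"}
            for n, t in enumerate(BANDS)
        ],
    }

draft = text_to_sinc("Summarize the attached ticket.")
```

Serializing `draft` with `json.dump(draft, fh, indent=2)` yields a file that already passes the band-count checks in the CI workflow above, leaving only the placeholder bands to replace.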