The single most frustrating AI coding output pattern: tutorial-style code. You ask for a production function and get a class with five methods that all say # TODO: implement this, wrapped in a 300-word explanation of what the code should do. The problem isn't the model's capability — it's the prompt signal. I've built a 6-band coding prompt structure that consistently produces complete, typed, runnable code on the first call.
Every band has a specific job in a coding prompt. PERSONA sets the engineering philosophy. DATA provides the exact environment. CONSTRAINTS eliminate the tutorial-code failure mode. Here's the complete structure for a real-world authentication task:
{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a backend engineer who writes minimal, typed, production-ready code. You prefer explicit error handling over silent failures. You never add code you weren't asked to add."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "I'm building JWT authentication middleware for an Express.js API. The middleware needs to validate Bearer tokens on protected routes and attach the decoded user payload to req.user."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Stack: Node.js 20, Express 4, jsonwebtoken 9.0. JWT_SECRET is in process.env. Token format: { userId: string, role: 'admin' | 'user', iat: number, exp: number }. Protected routes: /api/v1/* except /api/v1/auth/*"
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "No placeholder comments (# TODO, // implement). No example usage code unless asked. No explanation prose after the code block. Handle token expiry (401) separately from invalid token (403). TypeScript types, not JSDoc. Never use any as a type. Do not add logging — we have a separate logging middleware."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "Single TypeScript file. Types at top, middleware function below. Export the middleware as default. No test scaffolding. Max 50 lines."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Write the Express JWT authentication middleware."
    }
  ]
}
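A prompt in this fragment form still has to reach the model as plain text. Here is a minimal sketch of one way to flatten it — the `assemblePrompt` helper and the `###` band delimiter are illustrative choices, not part of any spec:

```typescript
// Flattens the fragment array above into the plain-text prompt that is
// actually sent to the model. Each band becomes one labeled section,
// ordered by its n index so PERSONA comes first and TASK comes last.
interface Fragment {
  n: number;
  t: string; // band name: PERSONA, CONTEXT, DATA, ...
  x: string; // band content
}

function assemblePrompt(fragments: Fragment[]): string {
  return [...fragments]
    .sort((a, b) => a.n - b.n)         // honor the n ordering
    .map((f) => `### ${f.t}\n${f.x}`)  // one labeled section per band
    .join("\n\n");
}
```

The ordering matters: the model reads the engineering philosophy and constraints before it reaches the one-sentence task at the bottom.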
PERSONA: Sets the engineering mindset. "Never add code you weren't asked to add" is not obvious — without it, models pad outputs with helper functions, example routes, and test stubs.
CONTEXT: The architectural situation. Where this code fits in the system. What problem it solves. Without this, the model writes standalone code that doesn't fit your actual structure.
DATA: The exact environment. Language version, library versions, env var names, type definitions. This is where hallucination prevention happens — give the model real facts and it uses them.
CONSTRAINTS: The anti-patterns to avoid. This band does the most work for coding prompts: no TODOs, no explanation prose, specific error code handling, no wildcard types. Each constraint eliminates a known failure mode.
FORMAT: Output shape. Single file, no test scaffolding, line count limit. The line count limit is especially effective — it forces the model to be concise rather than comprehensive.
TASK: One imperative sentence. The task is the destination, not the specification.
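For reference, the kind of output this prompt is engineered to produce looks roughly like the following. This is a sketch, not a canonical answer: to keep it self-contained it verifies the HS256 signature with node:crypto instead of the jsonwebtoken library, and the Req/Res interfaces are minimal stand-ins for Express's real Request/Response types.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

type Role = "admin" | "user";

interface TokenPayload {
  userId: string;
  role: Role;
  iat: number;
  exp: number;
}

// Minimal stand-ins for Express types so the sketch compiles without express.
interface Req { headers: Record<string, string | undefined>; user?: TokenPayload }
interface Res { status(code: number): Res; json(body: unknown): Res }
type Next = () => void;

// Stand-in for jsonwebtoken's verify(): checks the HS256 signature and
// decodes the payload; throws on any malformed or tampered token.
function verifyHs256(token: string, secret: string): TokenPayload {
  const [header, payload, signature] = token.split(".");
  if (!header || !payload || !signature) throw new Error("malformed token");
  const expected = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) throw new Error("bad signature");
  return JSON.parse(Buffer.from(payload, "base64url").toString()) as TokenPayload;
}

export default function authenticate(req: Req, res: Res, next: Next): void {
  const auth = req.headers["authorization"];
  if (!auth?.startsWith("Bearer ")) {
    res.status(403).json({ error: "invalid token" });
    return;
  }
  const secret = process.env.JWT_SECRET;
  if (!secret) {
    res.status(500).json({ error: "server misconfigured" });
    return;
  }
  let payload: TokenPayload;
  try {
    payload = verifyHs256(auth.slice("Bearer ".length), secret);
  } catch {
    res.status(403).json({ error: "invalid token" });  // invalid → 403
    return;
  }
  if (payload.exp * 1000 < Date.now()) {
    res.status(401).json({ error: "token expired" });  // expired → 401
    return;
  }
  req.user = payload;
  next();
}
```

Note what the constraints bought you: no TODOs, no usage example, expiry and invalidity handled as distinct status codes, and no stray logging.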
Coding prompt tip: Put the version numbers in DATA, not CONTEXT. "Node.js 20" in DATA is a factual constraint the model applies to syntax choices (e.g., native fetch, top-level await). "I'm building an Express app" in CONTEXT is situational framing. The distinction matters because models apply DATA as hard constraints and CONTEXT as soft situational awareness.
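A concrete example of a DATA fact shaping syntax: told "Node.js 20", a model can safely emit built-in globals that older runtimes lack. The sketch below (the `deepCopy` name is illustrative) relies on `structuredClone`, available since Node 17, instead of pulling in a dependency like lodash:

```typescript
// With "Node.js 20" as a hard DATA fact, built-in globals are fair game:
// structuredClone (Node 17+) instead of lodash.cloneDeep, native fetch
// (Node 18+) instead of importing node-fetch.
function deepCopy<T>(value: T): T {
  return structuredClone(value); // no dependency needed on Node >= 17
}
```

Without the version fact, a cautious model falls back to the lowest-common-denominator pattern and imports a library you don't need.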
Compare the raw prompt most people write:

Write JWT middleware for Express in TypeScript. It should validate tokens and attach the user to req.user. Use jsonwebtoken library.

with the same request plus three compact bands:

CONSTRAINTS: No TODOs. No prose after code. Handle expiry (401) vs invalid (403) separately. No any types.
FORMAT: Single TS file. Types at top. Max 50 lines.
DATA: Node 20, Express 4, jsonwebtoken 9.0, JWT_SECRET in env.
The structured prompt gets you code you can drop into production. The raw prompt gets you tutorial code you spend 20 minutes adapting.
Try AI Transform — Structure Your Coding Prompt Free