TrueSpec
Our Mission

Give AI the context
it actually needs.

AI-powered IDEs like Cursor, Copilot, and Windsurf have changed how we write code. But they share a common weakness: they're only as reliable as the context they receive.

Ask an AI to "add OAuth" and you'll get something that looks right but uses deprecated APIs, misses environment variables, and skips edge cases. You spend the next three hours debugging what was supposed to take ten minutes.

TrueSpec fixes this. We build verified specs — complete solutions with working code, dependency lists, environment configs, edge cases, and built-in validation instructions. When you use a TrueSpec spec as context in your AI IDE, the agent has a proven reference to work from — and the tests to verify its own output.

The result: production-ready code on the first try. No hallucinated imports. No missing configs. No five rounds of "that's not what I meant."

The real problem: proven specs stay private.

Some developers have already figured this out. They wrap the AI inside structured workflows — step-by-step execution paths with hardcoded constraints and validation gates. The AI's output isn't trusted on faith. It's tested. If it fails, it retries. If it passes, it's proven.
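As a rough sketch of that loop (every name here is illustrative, not a TrueSpec API), a validation-gated workflow might look like:

```python
# Illustrative sketch of a validation-gated AI workflow.
# generate() and validate() are hypothetical stand-ins, not TrueSpec APIs.

def run_with_validation(generate, validate, max_attempts=3):
    """Call the generator, test its output, and retry on failure."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        output = generate(feedback)          # ask the AI for code
        passed, feedback = validate(output)  # run deterministic tests
        if passed:
            return output                    # proven, not trusted
    raise RuntimeError(f"Validation failed after {max_attempts} attempts")

# Toy demo: the "generator" returns broken code first, then a fix.
attempts = iter(["retrun 42", "return 42"])
generate = lambda feedback: next(attempts)
validate = lambda output: (output == "return 42", "syntax error")
print(run_with_validation(generate, validate))  # prints "return 42"
```

The point of the sketch is the gate, not the generator: output only leaves the loop once a deterministic check has passed.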

These developers are getting reliable, production-grade results from AI. Not because they found better models — because they wrote better specs.

But every team that solves this problem solves it alone. A startup builds a verified workflow for generating API clients from OpenAPI schemas. A solo developer builds one for scaffolding database migrations. A platform team builds one for converting design tokens to component libraries. Each one took days or weeks to develop, test, and validate. Each one works reliably. And each one lives in a private repository, invisible to every other developer who will face the same problem next week.

There is no shared infrastructure for verified AI coding specs. No standard format. No way to publish a proven spec so that others can find it, audit it, and trust it.

That is what TrueSpec exists to build.

What We Believe

AI output should be verified, not trusted.

Trust is for humans. Software output is tested. Every spec on TrueSpec includes validation instructions — deterministic tests that the AI's output must pass. The agent writes the tests, runs them, and confirms the result. If it hasn't been verified, it doesn't deploy.

Specs are more valuable than prompts.

A prompt is a single instruction to a model. A spec is a complete, structured, reusable solution — with a defined execution path, working code, hardcoded constraints, and built-in validation. Prompts produce variable results. Specs produce consistent ones.

The best specs should be shared, not hoarded.

A verified spec that lives in a private repo helps one team. The same spec published to an open hub helps every team that encounters that problem. The developers who have figured out reliable AI coding have an opportunity to make that knowledge public.

Rigor and openness are not in tension.

Open-source does not mean unverified. Community-driven does not mean anything goes. TrueSpec is open and strict. Every spec must pass its validation suite before it is published. The hub is open to contributions and uncompromising about what it accepts.

What We Are Building

TrueSpec is an open-source community hub where developers publish, discover, and use verified AI coding specs.

A spec on TrueSpec is a complete solution — a structured set of instructions, working code, constraints, and validation gates that an AI agent consumes to execute a coding task correctly. Each spec defines its execution path, its code scaffolding, its constraints, and its tests. When a spec is published to the hub, it means every test has passed. The content is verified.
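One way to picture that structure (the field names below are hypothetical illustrations, not the hub's actual published format):

```python
# Hypothetical sketch of what a spec might contain; field names are
# illustrative, not TrueSpec's real schema.
spec = {
    "name": "oauth-login",
    "execution_path": [                # ordered steps the agent follows
        "install dependencies",
        "scaffold auth module",
        "wire environment variables",
        "run validation suite",
    ],
    "dependencies": ["authlib>=1.3"],  # pinned, not guessed
    "env": ["OAUTH_CLIENT_ID", "OAUTH_CLIENT_SECRET"],
    "constraints": ["no deprecated APIs", "tokens stored server-side"],
    "validation": ["tests/test_oauth_flow.py"],  # gates that must pass
}

# Publishing is gated on the validation suite: no gates, no publication.
has_validation_gates = len(spec["validation"]) > 0
```

The design choice the sketch tries to capture: everything an agent would otherwise guess (dependencies, environment, constraints) is stated explicitly, and the validation field makes "published" synonymous with "tested".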

We are not building an execution engine. We are not building another AI coding assistant. We are building the hub — the shared, version-controlled, verification-backed collection of proven specs that the entire AI coding ecosystem can draw from.

Who This Is For

If you have built a workflow that reliably gets an AI agent to produce correct code for a specific task — you are the person we built this for. Publish your spec. Let others verify it. Let the community improve it.

If you are tired of unreliable AI output and want to use workflows that have already been proven — you are also the person we built this for. Find a verified spec. Use it as context for your AI IDE. Trust the result — because the result has been tested.

Explore the Hub →

TrueSpec. Verified specs for AI that codes.