From Fragile Scripts to Bulletproof Specs
The professional toolkit for reliable LLM engineering. Define, test, and run provider-neutral workflows with a high-performance Rust core, and use them from Python, Node.js, or the CLI with full transparency.
version: "1"metadata:name: lead-qualifiermessages:- role: systemcontent: |Qualify this lead based on their inquiry.Respond with QUALIFIED or NOT_QUALIFIED.- role: usercontent: "I'm interested in your enterprise solution"sampling:temperature: 0.7seed: 42response:format: textstrict_mode: Warn
Tired of Fragile, Untestable AI Code?
Scattered Logic
Prompt strings, API keys, and model parameters are spread all over your codebase.
"Guess-and-Check" Testing
Reliably testing LLM outputs is a nightmare, making CI/CD impossible.
Framework Lock-in
Building with one agentic framework means a total rewrite if you want to switch.
The Solution
“Specado centralizes your entire LLM workflow—prompts, models, and tests—into a single, elegant spec.yaml or spec.json file. It's version-controllable, human-readable, and machine-executable.”
How It Works
A simple, powerful workflow for reliable AI engineering.
1. Define
Craft your entire prompt workflow—messages, model settings, and behavior—in a single, human-readable YAML file.
version: "1"messages:- role: systemcontent: "Qualify this lead..."- role: usercontent: "I'm interested in your enterprise plan."sampling:temperature: 1.5 # Value is intentionally out of rangestrict_mode: Warn
2. Validate
Instantly check your spec for syntax errors and schema violations. Catch mistakes before they become runtime problems.
```
$ specado validate --spec ./spec.yaml
✅ Prompt spec is valid
```
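Because validation is a plain CLI command, it slots straight into a test suite. Here's a minimal pytest-style sketch that shells out to the documented command; it assumes `specado validate` exits non-zero on an invalid spec (standard CLI behavior, but worth confirming against the docs):

```python
import subprocess

def test_spec_is_valid():
    # Shell out to the documented CLI; a broken spec should fail CI.
    # Assumes a non-zero exit code on validation errors.
    result = subprocess.run(
        ["specado", "validate", "--spec", "./spec.yaml"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stdout + result.stderr
```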
3. Preview
See the exact translated payload that will be sent to the provider API, plus a detailed report on any data loss or changes.
```
$ specado preview --prompt spec.yaml --provider provider.yaml

=== Translated Request ===
{
  "model": "gpt-5",
  "messages": [...],
  "temperature": 1.0
}

=== Lossiness Report ===
[WARN] Clamp: Value 1.5 above maximum 1.0 (sampling.temperature)
```
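The preview output doubles as a CI gate. Here's a sketch that shells out to the documented command and fails the build if the lossiness report contains warnings; it parses the textual report shown above rather than assuming any Python-level preview API:

```python
import subprocess

def test_translation_is_not_lossy():
    # Run the documented preview command and scan its report section.
    # Assumes the "[WARN] ..." report line format shown above.
    result = subprocess.run(
        ["specado", "preview", "--prompt", "spec.yaml", "--provider", "provider.yaml"],
        capture_output=True,
        text=True,
    )
    warnings = [line for line in result.stdout.splitlines() if line.startswith("[WARN]")]
    assert not warnings, "Lossy translation detected:\n" + "\n".join(warnings)
```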
4. Execute
Run your spec from the CLI or integrate it into your application using the native Python and Node.js libraries.
```python
import specado
from specado.types import Prompt, Message

# Define and run a prompt programmatically
prompt = Prompt(
    messages=[
        Message(role="system", content="You are a helpful assistant."),
        Message(role="user", content="What is the capital of France?"),
    ]
)

# Run against a provider and model by name
result = specado.run(prompt, provider="openai", model="gpt-5")

print(result.content)
```
Never Fly Blind: Understand Every API Call
Provider APIs are not created equal. When your spec asks for a temperature of 1.5 but the provider caps it at 1.0, what happens? Specado shows you. The lossiness report gives you a clear diff of what was changed, dropped, or clamped during translation.
```
[WARN] Clamp: Value 1.5 above maximum 1.0 (sampling.temperature)
[WARN] Drop: Parameter not supported by provider (sampling.top_k)
[INFO] Relocate: System message relocated to provider-specific location (messages[0])
```
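If the language bindings expose the same report on the run result, it becomes a programmatic guardrail rather than something you eyeball. The `lossiness` attribute and `severity` field below are illustrative assumptions, not confirmed API; the point is simply that the report is data you can branch on:

```python
import specado
from specado.types import Prompt, Message

prompt = Prompt(messages=[Message(role="user", content="Qualify this lead...")])
result = specado.run(prompt, provider="openai", model="gpt-5")

# Hypothetical: treat any WARN-level lossiness entry as a hard failure.
# (`result.lossiness` and `entry.severity` are assumed names; check the docs.)
warnings = [entry for entry in result.lossiness if entry.severity == "WARN"]
if warnings:
    raise RuntimeError(f"Lossy translation: {warnings}")
```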
Start Simple. Scale Anywhere. Never Get Locked In.
Specado is the perfect starting point. Its provider-neutral format gives you full portability. When your project's needs change, you can inspect the exact provider payload and migrate your logic without vendor lock-in.
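Concretely, switching backends is a one-line change at the call site, using the same `specado.run` signature shown in the Execute step. The Anthropic model name below is a placeholder; substitute whatever your provider spec defines:

```python
import specado
from specado.types import Prompt, Message

prompt = Prompt(
    messages=[
        Message(role="system", content="Qualify this lead..."),
        Message(role="user", content="I'm interested in your enterprise plan."),
    ]
)

# The prompt object is provider-neutral; only the target changes.
result = specado.run(prompt, provider="openai", model="gpt-5")
# result = specado.run(prompt, provider="anthropic", model="claude-sonnet")  # placeholder model id
```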
Core Benefits
Built for developers who demand reliability, performance, and flexibility.
- ✅ Validated & Reliable
- Validate specs against provider capabilities and schemas to catch errors before you run. Ensure your configurations are solid and prevent runtime surprises.
- 🚀 Rock-Solid Rust Core
- A high-performance Rust core means speed and memory safety you can rely on, with lightweight native bindings for your favorite language.
- 🐍 Write Once, Run Anywhere
- Write a spec once and run it anywhere—from the CLI for quick iteration, or via native Rust-powered bindings in your Python or Node.js applications.
- ✨ Intelligent DevEx
- Get rich, location-aware error messages and a detailed lossiness report on every run. Stop guessing what the provider is *actually* receiving.
- 🕊️ No Lock-In, Ever
- A provider-neutral spec means you can swap backends from OpenAI to Anthropic or others without rewriting your core logic. You own the prompt, not the provider.