Prodis
Prodis Labs, Inc. · Memo № 001 · Rev 2026.03.05 · Doc 001-A · Page 01 / 01

From Prodis Labs, Inc.
To Engineering teams shipping AI over production APIs
Re The backend control layer your agent is missing
Keywords framework-native adapter · typed endpoint contracts · evidence contracts · deterministic derivation · auditability

Stop shipping confidently wrong answers.

Prodis turns your Django, FastAPI, or Spring service into a verified AI assistant. Decomposed, validated, deterministic where code owns the step. Compatible with whatever you already ship — OpenAI SDK, Anthropic SDK, LangChain, MCP, or a homegrown loop. No API rewrites.


§ 01

Premise.

The demo passes. Production is where it breaks.

Every production AI eventually fails the same way. A confidently wrong answer reaches a user¹. An action runs that should have been blocked². A decision is made that the team cannot reconstruct after the fact³. The assistant refuses gracefully, or it does not. Polished language is not correctness, and the model is not the authority for that distinction.


§ 02

Method.

The model proposes. The backend executes.

Break question-answering into granular, schema-validated steps, and a non-deterministic task becomes an almost-deterministic one. The model interprets language at the seams; bounded, typed code owns the rest. Synthesis happens only after evidence is sufficient — and the system can refuse.

Fig. 01 — Trace · One question, decomposed
"How did her Q3 numbers compare to last quarter?"
01 Ground   "her" → staff: Alice (stf_4831)
02 Time     "Q3" → 2026-07-01 / 2026-09-30 · vs. Q2: 2026-04-01 / 2026-06-30
03 Plan     GET /staff?firstName=Alice
            GET /sales?staffId=stf_4831&start=…&end=…
04 Derive   Python: totals, deltas, comparison — deterministic
            → "Alice's Q3 sales were 12% lower than Q2."
If evidence breaks · "her" → 3 staff matches · answer withheld · asks: "Which staff member exactly?"
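The trace above can be sketched as ordinary backend code. This is a minimal, hypothetical sketch, not the Prodis API: `Evidence`, `ground_reference`, and `derive_comparison` are illustrative names. The two behaviors it shows are the ones the figure names: grounding refuses when a reference is ambiguous, and the comparison is computed in code.

```python
from dataclasses import dataclass

class NeedsClarification(Exception):
    """Raised instead of answering when a reference cannot be grounded."""

@dataclass
class Evidence:
    staff_matches: list  # rows from GET /staff?firstName=...

def ground_reference(evidence: Evidence) -> str:
    """Step 01: resolve "her" to exactly one staff id, or withhold the answer."""
    if len(evidence.staff_matches) != 1:
        raise NeedsClarification("Which staff member exactly?")
    return evidence.staff_matches[0]["id"]

def derive_comparison(q3_total: float, q2_total: float) -> str:
    """Step 04: the delta is arithmetic in code, never model-side math."""
    delta = (q3_total - q2_total) / q2_total * 100
    direction = "lower" if delta < 0 else "higher"
    return f"Q3 sales were {abs(delta):.0f}% {direction} than Q2."

# One match → grounded, and derivation runs:
one = Evidence(staff_matches=[{"id": "stf_4831", "firstName": "Alice"}])
print(ground_reference(one))             # stf_4831
print(derive_comparison(88.0, 100.0))    # Q3 sales were 12% lower than Q2.
```

With three matches for "her", `ground_reference` raises instead of returning: the answer is withheld and the clarifying question is surfaced, exactly the failure path in the last line of the trace.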

§ 03

Stack.

Today's agent stacks run the loop. They do not own correctness.

OpenAI SDK, Anthropic SDK, LangChain, LangGraph, MCP, direct tool calls, in-house loops — useful tools, incomplete control boundary. They call models, route tools, stream responses, and wire workflows. They do not make the backend the authority for answer eligibility over business APIs. In the usual loop, the model still decides whether the evidence is enough. Prodis moves that boundary into code.

Fig. 02 — Stack vs. Runtime · Drawn 2026.03.05

Today's Agent Stack
01 user asks
02 model picks tools
03 api returns data
04 model decides if enough
05 model writes answer
→ answer

Prodis Runtime
01 model interprets intent
02 plan compiled + validated
03 authorized APIs execute
04 result derived in code
05 answer readiness checked
→ synthesis only when ready
Prodis owns the control plane, not just the final check. Intent, plan, permissions, evidence, derivation, readiness, synthesis: each step has a backend-owned contract.
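Structurally, the right-hand column of Fig. 02 is a short function. The sketch below uses stand-in callables (every name here is hypothetical, not the Prodis API); the only point it makes is where the boundary sits: readiness is a code-owned gate, and synthesis runs only after it passes.

```python
def run_turn(question, interpret, compile_plan, execute, derive, ready, synthesize):
    """One turn of the runtime: synthesis is gated by a backend-owned readiness check."""
    intent = interpret(question)   # 01 model interprets intent
    plan = compile_plan(intent)    # 02 plan compiled + validated by the backend
    evidence = execute(plan)       # 03 only authorized APIs execute
    result = derive(evidence)      # 04 result derived deterministically in code
    if not ready(result):          # 05 answer readiness checked in code, not by the model
        return "Evidence is incomplete; answer withheld."
    return synthesize(result)     # synthesis only when ready

# Toy wiring with stand-in callables:
answer = run_turn(
    "How did Q3 compare to Q2?",
    interpret=lambda q: {"metric": "sales", "periods": ("Q3", "Q2")},
    compile_plan=lambda intent: ["GET /sales?start=…&end=…"],
    execute=lambda plan: {"q3": 88.0, "q2": 100.0},
    derive=lambda ev: (ev["q3"] - ev["q2"]) / ev["q2"] * 100,
    ready=lambda delta: delta is not None,
    synthesize=lambda delta: f"Q3 was {abs(delta):.0f}% lower than Q2.",
)
print(answer)  # Q3 was 12% lower than Q2.
```

Swap `ready` for one that returns `False` and the turn ends with the withheld-answer message: the model never gets to write around missing evidence.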

§ 04

Adapter.

Plug into the framework you already use.

Prodis ships as a framework-native adapter. Install in a Django, FastAPI, or Spring service; the adapter introspects what's already there and produces typed endpoint contracts the runtime validates plans against. Tool calls will keep improving — but they still don't introspect your framework: viewsets, serializers, routes, permissions, DTOs, business API semantics. The adapter does. Your APIs stay where they are. No parallel inventory, no rebuild around an AI framework.

Django · viewsets, serializers, permissions
FastAPI · route signatures, Pydantic models
Spring · controllers, DTOs

A working assistant in 30 minutes.
Correctness controls from day one.

Install the adapter, keep the model-calling code you already have, and ship a usable assistant without restructuring your service.
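The typed endpoint contracts the adapter produces can be pictured as data plus a check. This is a plain-Python sketch under loud assumptions: `EndpointContract`, `CONTRACTS`, and `validate_step` are illustrative names, and a real adapter would introspect these contracts from your viewsets, routes, or controllers rather than have you declare them by hand.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EndpointContract:
    method: str
    path: str
    params: dict  # required query parameters and their expected types

# Hypothetical contracts, as an adapter might derive them from the routes in Fig. 01.
CONTRACTS = {
    ("GET", "/staff"): EndpointContract("GET", "/staff", {"firstName": str}),
    ("GET", "/sales"): EndpointContract(
        "GET", "/sales", {"staffId": str, "start": str, "end": str}
    ),
}

def validate_step(method: str, path: str, args: dict) -> list:
    """Return the violations for one planned call; an empty list means valid."""
    contract = CONTRACTS.get((method, path))
    if contract is None:
        return [f"unknown endpoint {method} {path}"]
    errors = [f"missing param {name}" for name in contract.params if name not in args]
    errors += [
        f"param {name} should be {t.__name__}"
        for name, t in contract.params.items()
        if name in args and not isinstance(args[name], t)
    ]
    return errors

print(validate_step("GET", "/staff", {"firstName": "Alice"}))  # []
print(validate_step("GET", "/sales", {"staffId": 4831}))       # three violations
```

An invalid plan is rejected before any request is made, which is the "validated planning" half of the story: repairing a bad tool call after execution never happens because the call never executes.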

Compatible with
OpenAI SDK · Anthropic SDK · LangChain · LangGraph · MCP · direct tool calls · in-house loops

§ 05

Apparatus.

What runs inside the backend-owned runtime.

The control surface missing from the existing stack. Each capability is owned by code, not by prompts; each leaves an evidence trail the team can audit after the fact.

01 Typed endpoint contracts
   Capture request and response semantics in machine-usable contracts the runtime validates plans against.

02 Validated planning
   Reject invalid or unsupported plans before execution instead of repairing bad tool calls after the fact.

03 Evidence sufficiency
   Make answer readiness explicit so the system knows when evidence is complete and when it is not.

04 Deterministic derivation
   Use controlled computation for totals, comparisons, rankings, and grouping over API data instead of prompt-side math or guesswork.

05 Step-by-step auditability
   Inspect each step so teams can spot mistakes, review derivation, and debug with confidence.
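Item 04 in plain terms: once the APIs have returned rows, totals and deltas are ordinary arithmetic that lives in code and shows up in the audit trail. A toy sketch (function names and row shapes are illustrative, not the Prodis derivation engine):

```python
def quarter_total(rows):
    """Sum the amounts the sales API returned for one quarter."""
    return sum(r["amount"] for r in rows)

def delta_pct(current, previous):
    """Signed percentage change, computed and auditable in code."""
    return (current - previous) / previous * 100

q3_rows = [{"amount": 40.0}, {"amount": 48.0}]  # rows from GET /sales, Q3 window
q2_rows = [{"amount": 100.0}]                   # rows from GET /sales, Q2 window
print(delta_pct(quarter_total(q3_rows), quarter_total(q2_rows)))  # -12.0
```

Because each intermediate value exists as data rather than as prompt text, item 05 comes almost for free: a reviewer can replay the inputs to `quarter_total` and `delta_pct` and check the figure in the final answer.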


§ 06

Field.

Already piloted in three industries. Internal workflows and customer-facing commerce. Same hard requirement: the answer has to be right.

These pilots were chosen because correctness is not cosmetic. Operators and customers act on the answer; when evidence is missing, the system must say so. FMCG retail, market research, and fintech share little operationally, yet they hit the same boundary: useful AI is not enough without backend-owned evidence, permissions, and refusal.

01 FMCG retail chain · Ops + commerce
02 Market research provider · Internal ops
03 Fintech startup · Internal ops

§ 07

RSVP.

Early preview by request.

For engineering teams shipping AI to production who want backend ownership of correctness. Join the private waitlist for early product updates and launch access.

RSVP → waitlist@prodis.ai