Q&A
Common questions, honest answers.
Most of these come up at the moment the fun starts to slip — somewhere between sharing your useful prototype with a colleague and shipping it to your whole company. Bring more questions to GitHub Discussions; push back on the answers.
When the fun ends
I have a useful agent and a colleague who loves it. What changes when I try to roll it out company-wide?
The same things every other app in your org needs land on you at once: security review, cost control, compliance, governance, secrets, multi-tenant isolation, per-customer configs, deploy, observability. Each of those is a quarter of work on its own — and then there's the management of all of them. KDCube exists to absorb that work so prototype mode keeps going. Read the full story →
When does KDCube stop being overkill?
When you ship the second AI surface — or the second customer with different constraints. One agent, one widget, one cron job: optional. Two of them sharing tenancy, budgets, and audit: KDCube pays off. See in comparison →
Why not just build the productization layer ourselves?
You can. Six to twelve months of focused engineering for the governance layer alone — gateway, wallets, sandbox, audit, deploy. KDCube exists because we did exactly that, several times, before extracting it. If you'd rather start from a runtime and modify what doesn't fit, fork KDCube — that's what MIT is for. See the full feature matrix →
Is KDCube production-ready, or still experimental?
It runs in real production deployments today — gateway, processor, isolated execution, ReAct v2, economics. A few subsystems (managed multi-region, hosted control plane, advanced compliance forwarders) are still maturing on the roadmap.
Do I have to use ReAct v2, or can I bring my own agent runtime?
Bring your own. LangGraph, CrewAI, plain Python — anything that runs inside a bundle. ReAct v2 is the included loop, not the required one; it's there because we needed it ourselves. KDCube vs. LangGraph →
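As a sketch of what "anything that runs inside a bundle" means: the bundle just needs a callable to hand requests to, and the loop inside that callable is yours. The names below (`register_agent`, `AGENTS`) are illustrative stand-ins, not KDCube's actual API.

```python
# Hypothetical sketch: a bundle's agent is any registered callable.
# The registry below stands in for whatever the runtime actually uses.
AGENTS = {}

def register_agent(name):
    """Stand-in for a bundle's agent registry (illustrative only)."""
    def wrap(fn):
        AGENTS[name] = fn
        return fn
    return wrap

@register_agent("support")
def support_agent(message: str) -> str:
    # Any loop fits here: a LangGraph graph.invoke(...), a CrewAI
    # crew.kickoff(...), or plain Python like this echo stub.
    return f"handled: {message}"

print(AGENTS["support"]("reset my password"))
```

The point is the inversion: the runtime owns hosting, tenancy, and budgets; you own only the function body.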
What's actually in a "bundle"?
Backend logic, REST APIs, agents, embedded widgets, full UIs, MCP tools, scheduled @cron jobs, secrets, and storage — all packaged as one application unit. Hot-reloadable; deploy without restarting workers. Bundle docs →
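To make the shape of "one application unit" concrete, here is a minimal sketch of a bundle as a single declared object. The `Bundle` dataclass and its fields are hypothetical; KDCube's real bundle format may differ — see the bundle docs for the actual shape.

```python
from dataclasses import dataclass, field

# Illustrative only: a bundle groups agents, routes, cron jobs, and
# secrets into one deployable, hot-reloadable unit.
@dataclass
class Bundle:
    name: str
    agents: list[str] = field(default_factory=list)
    rest_routes: list[str] = field(default_factory=list)
    cron_jobs: dict[str, str] = field(default_factory=dict)  # schedule -> handler
    secrets: list[str] = field(default_factory=list)

support = Bundle(
    name="support-app",
    agents=["triage", "responder"],
    rest_routes=["/api/tickets"],
    cron_jobs={"0 * * * *": "digest.hourly"},  # @cron-style schedule
    secrets=["OPENAI_API_KEY"],
)
print(support.name, len(support.agents))
```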
How do I support different configs per customer without forking the bundle?
Bundles run against a (tenant, project, user) context resolved at admission. Per-customer configs and secrets live in the deployment descriptor, not the code. Storage paths, budgets, audit logs, and streaming channels inherit that scope automatically.
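The scoping idea can be sketched in a few lines: config is keyed by scope, resolved at admission, and the bundle code never branches on customer. The `Scope` type, descriptor layout, and `resolve_config` helper below are hypothetical illustrations, not KDCube's deployment-descriptor format.

```python
from dataclasses import dataclass

# Hypothetical illustration of scope-resolved config. The real
# descriptor format is KDCube's own; this only shows the pattern.
@dataclass(frozen=True)
class Scope:
    tenant: str
    project: str
    user: str

DESCRIPTOR = {  # per-customer config lives here, not in bundle code
    ("acme", "helpdesk"): {"model": "gpt-4o", "monthly_budget_usd": 500},
    ("globex", "helpdesk"): {"model": "claude-sonnet", "monthly_budget_usd": 50},
}

def resolve_config(scope: Scope) -> dict:
    # Resolved once at admission; storage paths, budgets, audit logs,
    # and streaming channels inherit the same scope downstream.
    return DESCRIPTOR[(scope.tenant, scope.project)]

cfg = resolve_config(Scope("acme", "helpdesk", "alice"))
print(cfg["model"])
```

Same bundle, two customers, zero forks: only the descriptor entry differs.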
Hosting & data
Is KDCube self-hosted only, or is there a managed cloud?
Self-hosted today — Docker Compose, Kubernetes, or ECS/Fargate via Terraform. A managed KDCube.cloud service is on the roadmap; it'll run the same open-source runtime, not a fork.
Where does my customer data live? Does anything leave my VPC?
Stays in your VPC. Postgres, Redis, S3 — all yours. The only outbound traffic is what you choose to send to LLM providers and external tools, with your own keys. vs. Bedrock (AWS-managed) →
Cost & licensing
KDCube is MIT-licensed — what's the catch?
There isn't one. MIT, free, fork it, ship it in a commercial product if you want. The only thing we ask is that you keep the license file.
What does it actually cost to run?
Infrastructure-only — Postgres, Redis, your own LLM provider keys. No license fee, no per-seat pricing, no usage tier. A laptop is enough to develop; production cost depends on your workload.
Company & commitment
Who's behind KDCube?
A small team with more than ten years of experience building data and application platforms for customers. We've watched the fun-ends moment hit, several times, on different platforms. KDCube is the runtime we kept rebuilding underneath them, finally extracted into one place. By choice we keep a low profile — we'd rather the work speak for itself than the people behind it.
Are you a startup looking for users?
Yes — and we say that openly. We want users, contributors, and feedback from people who hit the same fun-ends moment we did. Nothing is gated behind a paid tier; the open-source runtime is the product.
What if you pivot or stop maintaining it?
The code is MIT-licensed and open source — it's yours regardless. We're invested in this for the long arc, but a community that owns the platform doesn't depend on us being around.
Versus the alternatives
How does KDCube compare to LangGraph, CrewAI, or AutoGen?
Different shape. Those are agent-loop libraries — you decide how to host them, how to isolate tenants, how to bill, how to deploy. KDCube is the runtime around the loop: tenancy, budgets, audit, deploy. Run LangGraph or CrewAI inside a bundle and KDCube handles the rest. vs. LangGraph → · vs. CrewAI →
What about Bedrock Agents — isn't that already a managed runtime?
Yes, on AWS, with Bedrock as the substrate and limited surface choices. KDCube runs on your infra (Compose, Kubernetes, ECS/Fargate), is MIT-licensed, exposes its internals, and lets you bring your own LLM provider keys. Use Bedrock Agents if you want the AWS-managed answer; use KDCube if you want a runtime you can reason about and rehost. vs. Bedrock AgentCore →
Dify and Flowise are also self-hostable AI platforms — how is KDCube different?
Closer in shape than the others, but they lean no-code/low-code first and are organized around prompt flows. KDCube is code-first (a Python SDK) and treats tenancy, budgets, and isolated execution as platform primitives, not workspace settings. If you want flow-builder UX, look at them; if you want a runtime your engineers will live in, look here. See LLM-app-platform tab →
Contribute & community
How do I contribute, ask questions, or report bugs?
GitHub Issues for bugs and feature requests. GitHub Discussions for everything else — design questions, ideas, "is this the right approach", and general Q&A. Pull requests welcome.