In build · Agent orchestration

Many agents. One braid.

Every agent framework wants you to live inside it. Braids is the layer that sits over them. Weave a CrewAI crew, a LangGraph workflow, your Claude Code MCP server, and a local Ollama backstop into one coherent pipeline. The agents don't have to know about each other — Braids does.

The weave

A canvas for stitching agents together.

Drop frameworks onto the canvas — CrewAI here, LangGraph there, an OpenAI Codex cell off to the side, Ollama at the bottom. Wire outputs to inputs. Set fallbacks. The full plan-code-test-push loop, made of pieces from whichever ecosystem does each part best. No more 'we standardized on framework X' — you can use them all.

[Diagram: SF Braids (orchestrator) wired to CrewAI (crew · plan), LangGraph (graph · flow), Ollama (local · free), Claude MCP (cloud · smart), OpenAI Codex (fallback), Aider (patches)]
Agent frameworks woven
Claudeception

Every run makes the next run better.

Every time a braid completes, Braids indexes the transcripts, the diffs, the test results, the failures. Future runs — across any framework — pull from that RAG. Your codebase teaches your agents what works in your codebase. After a month, no off-the-shelf agent setup matches what your braid has internalized about your stack.
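A toy sketch of that loop, with keyword overlap standing in for whatever embedding-based retrieval Braids actually uses (the `RunIndex` class and its method names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class RunRecord:
    braid_id: int
    transcript: str
    outcome: str  # "pass" or "fail"

class RunIndex:
    """Toy stand-in for the run RAG: keyword overlap instead of embeddings."""
    def __init__(self):
        self.records: list[RunRecord] = []

    def index(self, record: RunRecord) -> None:
        # Every completed braid lands here: transcripts, diffs, failures.
        self.records.append(record)

    def retrieve(self, query: str, k: int = 3) -> list[RunRecord]:
        # Rank past runs by overlap with the current task.
        terms = set(query.lower().split())
        scored = sorted(
            self.records,
            key=lambda r: len(terms & set(r.transcript.lower().split())),
            reverse=True,
        )
        return scored[:k]

idx = RunIndex()
idx.index(RunRecord(1, "auth session refresh failed on expired token", "fail"))
idx.index(RunRecord(4, "auth session refresh fixed by rotating token early", "pass"))
hits = idx.retrieve("session token refresh")
```

The key property is that failures get indexed alongside successes, so a future run can retrieve both "what broke" and "what fixed it" before touching the same file.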

[Diagram: RAG over your codebase, fed by run #1, run #4, run #12, run #28, run #54, run #91. Every braid run feeds the next one. Month-3 you ≠ month-1 you.]
Recursive learning loop
Sprint Runner

Plan → Code → Test → Push. Unattended.

The opinionated default braid. Hand it a Linear issue or a feature spec. The architect agent plans it. The coder agent writes it. The tester agent runs it through SF Dynamo. The reviewer agent files the PR. You walk away. You come back to a passing pull request — or to a clear, readable explanation of why it stopped.

[Diagram: PLAN architect · done | CODE coder · done | TEST tester · running | PUSH reviewer · queued. Live log · braid #41:
[architect] decomposed into 4 stories, 12 tasks
[coder] starting src/auth/session.ts
[coder] ✓ session.ts · 43 lines · 2 imports
[coder] starting src/auth/mfa.ts
[coder] ✓ mfa.ts · 87 lines · committed
[tester] running suite via SF Dynamo on iPhone 15...
[tester] 38 tests passing, 4 still running...]
Plan → Code → Test → Push
Local-first by default

Your hardware. Your tokens. Your call.

Ollama is the default execution backend. The architect crew's planner runs on Llama. The coder runs on Qwen. The tester might call out to a hosted Claude for the tricky regression. Braids picks per task, based on what's available, what's free, and what you've configured. Your monthly inference bill drops from $10K to pocket change, without giving up the smart calls when you need them.
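Per-task routing can be pictured as a lookup table with a local fallback. A hypothetical sketch: the routing table mirrors the model names on this page, the `pick` function and its `cloud_allowed` switch are invented, and the prices are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    name: str
    where: str           # "local" or "cloud"
    cost_per_call: float

# Illustrative routing table: local models are free, cloud calls are metered.
ROUTES = {
    "plan":       Model("qwen2.5-coder:32b", "local", 0.00),
    "code":       Model("qwen2.5-coder:14b", "local", 0.00),
    "tricky bug": Model("claude-sonnet-4.6", "cloud", 0.04),
    "review":     Model("llama3.1:8b",       "local", 0.00),
}

def pick(task: str, cloud_allowed: bool = True) -> Model:
    model = ROUTES.get(task, ROUTES["code"])
    if model.where == "cloud" and not cloud_allowed:
        return ROUTES["code"]  # degrade to a local model instead of failing
    return model

sprint_cost = sum(
    pick(t).cost_per_call for t in ["plan", "code", "tricky bug", "review"]
)
```

The design point is that "local-first" is a default, not a cage: the one cloud call in the table is the entire sprint cost.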

[Table: braid task router · per-step model pick
plan · qwen2.5-coder:32b · local · $0.00
code · qwen2.5-coder:14b · local · $0.00
tricky bug · claude-sonnet-4.6 · cloud · $0.04
review · llama3.1:8b · local · $0.00
tests · qwen2.5-coder:7b · local · $0.00
commit msg · phi4-mini · local · $0.00
total this sprint · $0.04]
Local-first model routing
In the Speedforge mesh

Talks to Hopper, Threads, Dynamo, Focus.

Hopper is the launch surface. Threads is the engagement layer. Dynamo is the verifier. Focus is the message bus. Braids is what makes them feel like one product. When SF Threads finishes a plan, it hands the build to Braids. When Braids finishes a feature, it hands the test to Dynamo. When Dynamo greenlights, Hopper pings you. The suite isn't a marketing word — it's how the work moves.
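Those handoffs are pub/sub events on the bus. A minimal sketch, with a toy `Bus` class standing in for SF Focus and invented topic names for the handoffs:

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """Minimal pub/sub bus standing in for SF Focus."""
    def __init__(self):
        self.subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def on(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subs[topic].append(handler)

    def emit(self, topic: str, event: dict) -> None:
        for handler in self.subs[topic]:
            handler(event)

bus = Bus()
log: list[str] = []

# Each product subscribes to the event that hands work to it.
bus.on("plan.done",   lambda e: (log.append("braids builds"), bus.emit("build.done", e)))
bus.on("build.done",  lambda e: (log.append("dynamo verifies"), bus.emit("verify.done", e)))
bus.on("verify.done", lambda e: log.append("hopper pings you"))

bus.emit("plan.done", {"feature": "mfa"})  # SF Threads finishes a plan
```

No product calls another directly; each only emits the event that ends its stage, which is what lets the suite feel like one pipeline instead of four integrations.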

[Diagram: SF Hopper (launchpad) · SF Threads (engagement) · SF Braids (orchestration) · SF Dynamo (verification) · SF Focus (mission control)]
Speedforge mesh

Theme song · TBD — pitch one to Sarah