Option D — Stream AI chat
A single endpoint that gives you a streaming AI conversation backed by your pod's full data and skill library.
What it is
POST /api/external/chat/stream hands your query to the pod's AI system and streams the response back as Server-Sent Events. The AI has access to everything in your pod: entities, documents, conversation memory, and the full skill library. It can search your data, create or update entities, invoke skills, and return structured results — all in response to a plain English question.
Unlike skill invocation, you do not need to know what operation to call. The AI figures that out. The trade-off: the output is less predictable, and latency is higher because the AI reasons before it responds.
Get an API key
Go to Settings → API Keys → New Key, and select the scope chat.stream.
List available channels
Response:
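The request and response samples are not reproduced in this excerpt. As a sketch only, assuming a hypothetical GET /api/external/channels route, bearer-token auth, and a { channels: [...] } response shape (none of which are confirmed by this page):

```typescript
// Hypothetical sketch: the real route and response shape may differ.
interface Channel {
  id: string;
  type: "personal_ai" | "ai_thread"; // assumed type names
  name: string;
}

// Pick the personal AI channel out of a channel list, if present.
function findPersonalChannel(channels: Channel[]): Channel | undefined {
  return channels.find((c) => c.type === "personal_ai");
}

async function listChannels(apiKey: string): Promise<Channel[]> {
  const res = await fetch("https://your-pod.example.com/api/external/channels", {
    headers: { Authorization: `Bearer ${apiKey}` }, // assumed auth scheme
  });
  if (!res.ok) throw new Error(`List channels failed: ${res.status}`);
  const body = (await res.json()) as { channels: Channel[] };
  return body.channels;
}
```

findPersonalChannel mirrors the fallback behavior described below: when no channelId is supplied, the pod targets the personal AI channel itself.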
If you omit channelId when streaming, the pod automatically routes to your personal AI channel (creating it if it does not exist yet).
Stream a conversation
The response is a stream of newline-delimited SSE frames. Each frame is a JSON object prefixed with the string "data: ".
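As a minimal sketch, splitting a received chunk into frames could look like this (frame fields beyond type are assumptions; real code must also buffer lines that arrive split across chunks):

```typescript
interface Frame {
  type: "content" | "step" | "proposal" | "complete" | "error";
  [key: string]: unknown; // other fields vary by frame type
}

// Parse the "data: {...}" lines out of one chunk of the SSE stream.
function parseFrames(chunk: string): Frame[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => JSON.parse(line.slice("data: ".length)) as Frame);
}
```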
SSE frame format
Every frame has a type field. Parse and handle each type separately.
content
Streamed text chunk. Concatenate these to build the full response.
step
The AI used a tool against your pod. Useful for showing "thinking" state in a UI, or for auditing what data the AI accessed.
proposal
The AI wants to make a change (create/update/delete an entity, invoke a skill with write effects) but does not have auto-approve permission for this operation. You need to approve it before it executes.
Approve it with:
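The approval snippet is not included in this excerpt. A hypothetical sketch, assuming a POST /api/external/chat/proposals/:id/approve route (the real endpoint may differ; check the API reference):

```typescript
// Pure helper so the (assumed) route is easy to verify in isolation.
function approvePath(proposalId: string): string {
  return `/api/external/chat/proposals/${encodeURIComponent(proposalId)}/approve`;
}

// Hypothetical call: endpoint and auth scheme are assumptions.
async function approveProposal(apiKey: string, proposalId: string): Promise<void> {
  const res = await fetch(`https://your-pod.example.com${approvePath(proposalId)}`, {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Approve failed: ${res.status}`);
}
```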
complete
The stream is done. Contains the channel and message IDs for follow-up queries or linking back to the Synap app.
error
Something went wrong. The stream ends after this frame.
TypeScript example
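The example itself is not reproduced in this excerpt. Below is a minimal client sketch, assuming the endpoint above, bearer-token auth, a query request field, and a text field on content frames (all field names beyond type are assumptions):

```typescript
type Frame =
  | { type: "content"; text: string } // field name assumed
  | { type: "step"; tool?: string } // field name assumed
  | { type: "proposal"; id?: string } // field name assumed
  | { type: "complete"; channelId?: string; messageId?: string } // assumed
  | { type: "error"; message?: string }; // field name assumed

// Fold one frame into the accumulated answer text (pure, easy to test).
function accumulate(answer: string, frame: Frame): string {
  return frame.type === "content" ? answer + frame.text : answer;
}

async function streamChat(apiKey: string, query: string): Promise<string> {
  const res = await fetch("https://your-pod.example.com/api/external/chat/stream", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // assumed auth scheme
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query }), // field name assumed; see request schema
  });
  if (!res.ok || !res.body) throw new Error(`Stream failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  let answer = "";

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // Frames are newline-delimited; keep the last partial line in the buffer.
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) {
      if (!line.startsWith("data: ")) continue;
      const frame = JSON.parse(line.slice("data: ".length)) as Frame;
      if (frame.type === "error") throw new Error(frame.message ?? "stream error");
      answer = accumulate(answer, frame);
    }
  }
  return answer;
}
```

accumulate is kept pure so the frame handling can be unit-tested without a live stream; extend it to surface step and proposal frames in a UI.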
Persistent history
When you omit channelId, the pod uses your personal AI channel — one permanent channel per user per workspace. Every message you send this way is saved to that channel, visible inside the Synap app, and included in the AI's context window on the next call.
This means you can:
- Ask a follow-up without re-sending context: the AI already knows what you discussed
- See the full conversation history in the Synap app under the Chat tab
- Have the AI's memory compound over time (the pod runs session compaction automatically)
If you want an isolated conversation that does not affect your main channel history, create a new ai_thread channel first and pass its ID.
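A sketch of that flow, assuming a hypothetical POST /api/external/channels route for creating channels (this page does not document the actual endpoint or payload):

```typescript
// Assumed payload shape for creating an isolated thread.
function threadPayload(name: string): { type: "ai_thread"; name: string } {
  return { type: "ai_thread", name };
}

// Hypothetical call: returns the new channel's ID to pass as channelId later.
async function createThread(apiKey: string, name: string): Promise<string> {
  const res = await fetch("https://your-pod.example.com/api/external/channels", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // assumed auth scheme
      "Content-Type": "application/json",
    },
    body: JSON.stringify(threadPayload(name)),
  });
  if (!res.ok) throw new Error(`Create channel failed: ${res.status}`);
  const { id } = (await res.json()) as { id: string };
  return id;
}
```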
Passing a specific channel
The AI will load that channel's history as context. History is bounded by HISTORY_TOKEN_BUDGET (12,000 tokens); older messages are compacted automatically.
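For example, building the request body with an explicit channel might look like this (the query and channelId field names are assumed from this page; confirm against the full request schema):

```typescript
// Assumed body shape: omit channelId to fall back to the personal AI channel.
function buildStreamBody(query: string, channelId?: string) {
  return {
    query,
    ...(channelId ? { channelId } : {}),
  };
}
```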
Agent types
By default the pod uses the meta agent — the full orchestrator with access to all tools. You can request a specific agent personality with agentType:
Unknown agentType values fall back to the default orchestrator and log a warning — they do not error.
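As an illustration, the body for a non-default agent might look like this ("researcher" is a hypothetical agent name; the query and agentType field names are assumed from this page):

```typescript
// Hypothetical agentType; unknown values fall back to the default orchestrator.
const body = {
  query: "Summarize this week's open tasks",
  agentType: "researcher",
};

const payload = JSON.stringify(body); // body for POST /api/external/chat/stream
```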
Full request schema
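The schema itself is not reproduced in this excerpt. Reconstructed as a TypeScript type from the fields this page mentions (field names are assumptions; any fields beyond these three are not covered here):

```typescript
// Reconstruction from this page, not the authoritative schema.
interface ChatStreamRequest {
  query: string; // the plain-English question (field name assumed)
  channelId?: string; // omit to use your personal AI channel
  agentType?: string; // omit to use the default meta agent
}

const example: ChatStreamRequest = { query: "What changed today?" };
```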
Using with Claude Code
Add this as a tool Claude Code can call to query your pod mid-task:
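The original tool snippet is not included in this excerpt. One hypothetical shape is a small CLI script (the file path scripts/ask-pod.ts, the SYNAP_API_KEY env var, and the request field names are all assumptions) that prints the streamed answer to stdout:

```typescript
// scripts/ask-pod.ts (hypothetical). Run with: npx tsx scripts/ask-pod.ts "question"

// Keep only the text of content frames from a batch of complete SSE lines (pure).
function contentText(lines: string[]): string {
  let out = "";
  for (const line of lines) {
    if (!line.startsWith("data: ")) continue;
    const frame = JSON.parse(line.slice("data: ".length));
    if (frame.type === "content") out += frame.text;
  }
  return out;
}

async function main(): Promise<void> {
  const query = process.argv.slice(2).join(" ");
  const res = await fetch("https://your-pod.example.com/api/external/chat/stream", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.SYNAP_API_KEY}`, // assumed auth scheme
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query }), // field name assumed
  });
  if (!res.ok || !res.body) throw new Error(`Stream failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial trailing line
    process.stdout.write(contentText(lines));
  }
  process.stdout.write("\n");
}

// Only run as a CLI when an API key is configured.
if (process.env.SYNAP_API_KEY) {
  main().catch((err) => {
    console.error(err);
    process.exit(1);
  });
}
```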
In your CLAUDE.md:
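The CLAUDE.md contents are not shown in this excerpt; a hypothetical version, assuming a tool script saved at scripts/ask-pod.ts:

```markdown
## Querying the user's Synap pod

When you need context from the Synap pod (entities, documents, prior
conversations), run:

    npx tsx scripts/ask-pod.ts "<plain-English question>"

The script streams the pod's AI answer to stdout. It requires the
SYNAP_API_KEY environment variable to be set.
```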
Next steps
- Option C — Skill invocation — deterministic, no natural language
- Option E — SDK and direct API — full typed access for custom pipelines
- API Keys — manage scopes
