feat: streaming structured output across openai/openrouter/grok/groq + summarize fix #527
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 WalkthroughWalkthroughchat({ outputSchema, stream: true }) now produces an AsyncIterable that emits JSON delta chunks plus a terminal CUSTOM event ChangesStreaming Structured Output (single cohesive change DAG)
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant ChatRoute as /api/chat
    participant ChatFn as chat()
    participant Engine as TextEngine
    participant Adapter as TextAdapter
    participant UI as ChatUI
    Client->>ChatRoute: POST structured-stream request
    ChatRoute->>ChatFn: chat({ outputSchema, stream: true, ... })
    ChatFn->>ChatFn: branch on outputSchema && stream === true
    ChatFn->>Engine: convert messages / run agent loop (if tools)
    Engine->>Adapter: adapter.structuredOutputStream(request)
    loop streaming
        Adapter-->>ChatFn: TEXT_MESSAGE_CONTENT chunk
        ChatFn-->>ChatRoute: forward chunk via SSE
        ChatRoute-->>UI: SSE chunk
        UI->>UI: accumulate delta
    end
    Adapter->>Adapter: parse accumulated JSON & validate schema
    Adapter-->>ChatFn: CUSTOM(structured-output.complete { object, raw, reasoning? })
    ChatFn-->>ChatRoute: forward CUSTOM event
    ChatRoute-->>UI: SSE CUSTOM event
    UI->>UI: display structured object
```
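The flow above can be sketched from the consumer's side: accumulate `TEXT_MESSAGE_CONTENT` deltas for live rendering and treat the terminal `CUSTOM` event as the validated result. The chunk shapes below are simplified assumptions inferred from the diagram, not the exact `@tanstack/ai` types.

```typescript
// Minimal chunk shapes inferred from the sequence diagram (hypothetical,
// not the exact @tanstack/ai types).
type StreamChunk =
  | { type: 'TEXT_MESSAGE_CONTENT'; delta: string }
  | { type: 'CUSTOM'; name: string; value: { object: unknown; raw: string } }

// Consume the stream: accumulate JSON deltas, return the object carried
// by the terminal CUSTOM event.
async function collectStructuredOutput(
  stream: AsyncIterable<StreamChunk>,
): Promise<{ object: unknown; raw: string }> {
  let accumulated = ''
  for await (const chunk of stream) {
    if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
      accumulated += chunk.delta // partial JSON, useful for live rendering
    } else if (chunk.type === 'CUSTOM' && chunk.name === 'structured-output.complete') {
      return chunk.value // adapter already parsed + schema-validated
    }
  }
  throw new Error(`Stream ended before structured-output.complete (partial: ${accumulated})`)
}
```

The partial `accumulated` string is what a UI would render as a live preview; the terminal event is the only chunk whose payload is guaranteed to be parsed and schema-validated.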
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
🚀 Changeset Version Preview: 14 package(s) bumped directly, 20 bumped as dependents.
🟥 Major bumps
🟨 Minor bumps
🟩 Patch bumps
View your CI Pipeline Execution ↗ for commit 603cd6b
☁️ Nx Cloud last updated this comment
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-code-mode
@tanstack/ai-code-mode-skills
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-isolate-cloudflare
@tanstack/ai-isolate-node
@tanstack/ai-isolate-quickjs
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-utils
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/openai-base
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
.changeset/streaming-structured-output-openrouter.md (1)
1-6: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Missing changeset entry for `@tanstack/ai`.

The changeset only bumps `@tanstack/ai-openrouter`, but according to the PR summary and the AI-generated diff summary, `@tanstack/ai` also ships public API additions in this PR:

- `StructuredOutputCompleteEvent` / `StructuredOutputStream` types added to `packages/typescript/ai/src/types.ts`
- New `outputSchema && stream === true` path in `packages/typescript/ai/src/activities/chat/index.ts`
- `BaseTextAdapter` behaviour change (default `structuredOutputStream` removed)

Without a corresponding entry, `@tanstack/ai` won't receive a version bump when the changeset is consumed.

📦 Suggested addition to the changeset

```diff
 ---
+'@tanstack/ai': minor
 '@tanstack/ai-openrouter': minor
 ---
```

As per coding guidelines: `.changeset/**/*.md` — create a changeset with `pnpm changeset` before making changes for release management.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In @.changeset/streaming-structured-output-openrouter.md around lines 1 - 6, Add a new changeset markdown entry that also bumps `@tanstack/ai` (in addition to `@tanstack/ai-openrouter`) and documents the public API additions: include the new types StructuredOutputCompleteEvent and StructuredOutputStream (from packages/typescript/ai/src/types.ts), the new outputSchema && stream === true chat path (packages/typescript/ai/src/activities/chat/index.ts), and the BaseTextAdapter behavior change (removal/default change of structuredOutputStream); ensure the changeset message explains these API additions/behavior change so `@tanstack/ai` is versioned when the changeset is released.

.changeset/streaming-structured-output-chat.md (1)
1-6: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

`@tanstack/ai-openrouter` is missing from the changeset packages block.

`OpenRouterTextAdapter.structuredOutputStream` is a new public method in `packages/typescript/ai-openrouter` — a user-visible feature addition. Without a corresponding entry in the changeset, the package version won't be bumped on release.

📦 Suggested fix

```diff
 ---
 '@tanstack/ai': minor
+'@tanstack/ai-openrouter': minor
 ---
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In @.changeset/streaming-structured-output-chat.md around lines 1 - 6, The changeset is missing a package entry for the new public method OpenRouterTextAdapter.structuredOutputStream, so add the package name for the ai-openrouter adapter to the changeset packages block and mark it as a minor release (public API addition); update the same changeset that introduced streaming-structured-output-chat to include '@tanstack/ai-openrouter': minor so the package version is bumped on release and consumers get the new structuredOutputStream method.
🧹 Nitpick comments (2)
packages/typescript/ai-openrouter/src/adapters/text.ts (1)
266-530: 🏗️ Heavy lift
`structuredOutputStream` duplicates ~150 lines of stream lifecycle boilerplate from `chatStream`.

The AGUIState initialization (lines 272–291), the per-chunk RUN_STARTED emission + inline error dispatch + `processChoice` loop (lines 324–376), and the catch block structure (lines 490–529) are near-verbatim copies of the equivalent blocks in `chatStream`. Any future change to stream lifecycle handling (e.g., abort signal threading, new event types, logging conventions) must be applied to both methods independently, increasing divergence risk.

Consider extracting a shared `runStreamLoop(stream, aguiState, accumulators, onFinalize)` helper that takes the finalization callback as a parameter — the only materially different code between the two methods.

🤖 Prompt for AI Agents
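A minimal sketch of the suggested extraction, with hypothetical chunk and accumulator shapes standing in for the adapter's real types (`runStreamLoop` and `onFinalize` are the names proposed in the comment above):

```typescript
// Hypothetical shapes standing in for the adapter's real types.
type Chunk = { type: string; delta?: string }
type Accumulators = { content: string; reasoning: string }

// Shared lifecycle: drain the provider stream into accumulators, forward
// each chunk, then hand off to a method-specific finalizer. Both
// chatStream and structuredOutputStream could delegate here.
async function* runStreamLoop(
  stream: AsyncIterable<Chunk>,
  acc: Accumulators,
  onFinalize: (acc: Accumulators) => Chunk[],
): AsyncIterable<Chunk> {
  try {
    for await (const chunk of stream) {
      if (chunk.type === 'TEXT_MESSAGE_CONTENT') acc.content += chunk.delta ?? ''
      if (chunk.type === 'REASONING_CONTENT') acc.reasoning += chunk.delta ?? ''
      yield chunk
    }
    // Method-specific wrap-up, e.g. parse JSON + emit CUSTOM + RUN_FINISHED.
    yield* onFinalize(acc)
  } catch (error) {
    yield { type: 'RUN_ERROR', delta: String(error) }
  }
}
```

The only per-method code left is the `onFinalize` callback; lifecycle changes (abort threading, new event types, logging) then land in one place.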
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/typescript/ai-openrouter/src/adapters/text.ts` around lines 266 - 530, structuredOutputStream duplicates ~150 lines of stream lifecycle boilerplate from chatStream (AGUIState init, per-chunk RUN_STARTED/error handling, processChoice loop, and catch block); extract this shared logic into a new helper (e.g., runStreamLoop) that accepts the stream iterator, the aguiState, accumulators (accumulatedReasoning, accumulatedContent, toolCallBuffers), and a finalization callback (onFinalize) to run the method-specific wrap-up (JSON parsing, structured-output CUSTOM emission, and RUN_FINISHED); update structuredOutputStream to build its AGUIState and accumulators, call runStreamLoop(stream, aguiState, {accumulatedReasoning, accumulatedContent, toolCallBuffers}, onFinalize) and move processChoice usage, per-chunk yields, and catch handling into the shared helper so both structuredOutputStream and chatStream reuse the same lifecycle code.

packages/typescript/ai/src/activities/chat/index.ts (1)
1717-1722: 💤 Low value
mock-ID prefixes leak into real adapter fallback runs.
`fallbackStructuredOutputStream` is the production path for any adapter without a native `structuredOutputStream`, but the synthesized `runId`/`threadId`/`messageId` use a `mock-` prefix. That looks like test scaffolding in observability/devtools output and is inconsistent with `createId('run' | 'thread' | 'msg')` used elsewhere in this engine.

♻️ Use neutral prefixes

```diff
- const runId = chatOptions.runId ?? `mock-${Date.now()}`
- const threadId = chatOptions.threadId ?? `mock-${Date.now()}`
- const messageId = `mock-${Date.now()}-${Math.random().toString(36).slice(2)}`
+ const rand = () => Math.random().toString(36).slice(2, 9)
+ const runId = chatOptions.runId ?? `run-${Date.now()}-${rand()}`
+ const threadId = chatOptions.threadId ?? `thread-${Date.now()}-${rand()}`
+ const messageId = `msg-${Date.now()}-${rand()}`
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@packages/typescript/ai/src/activities/chat/index.ts` around lines 1717 - 1722, The current fallback ID generation in chat (variables chatOptions, runId, threadId, messageId) uses "mock-" prefixes which leak into production telemetry; replace the mock-prefixed defaults with the same neutral ID generation used elsewhere (e.g., call the shared createId helper with 'run' | 'thread' | 'msg' or use the existing engine's neutral prefix strategy) so runId/threadId/messageId are generated consistently when chatOptions doesn't supply them; update the fallback logic in the code that reads chatOptions.model/timestamp to use createId for runId and threadId and a non-mock random message id for messageId.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.changeset/streaming-structured-output-chat.md:
- Line 5: The changelog line incorrectly states that BaseTextAdapter provides a
default structuredOutputStream; update the description to say BaseTextAdapter
does not implement structuredOutputStream and that the activity layer's
fallbackStructuredOutputStream is the single source of truth for non-streaming
adapters. Reference BaseTextAdapter and structuredOutputStream in
packages/typescript/ai/src/activities/chat/adapter.ts and mention
fallbackStructuredOutputStream in the activity layer as the fallback, and advise
adapter authors to implement structuredOutputStream if they need native
streaming behavior.
In `@packages/typescript/ai-openrouter/src/adapters/text.ts`:
- Around line 378-440: The finalization path for empty non-exception streams
fails to emit RUN_STARTED if no chunks arrived; update the end-of-stream logic
in the generator (the block after the streaming loop that currently emits
RUN_ERROR for empty content) to check hasEmittedRunStarted and, if false, set
hasEmittedRunStarted = true and yield the same RUN_STARTED chunk that the normal
streaming path emits (include runId from aguiState.runId, model resolvedModel,
and timestamp) before yielding RUN_ERROR; this mirrors the catch block behavior
and ensures the AG-UI lifecycle (hasEmittedRunStarted, RUN_STARTED, then
RUN_ERROR) is preserved.
In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 1860-1886: The loop over engine.run() can yield a RUN_ERROR chunk
without throwing, but the code always proceeds to call engine.getMessages() and
adapter.structuredOutputStream(...); fix by detecting an error-yield and
short-circuiting: inside the for-await loop in runStreamingStructuredOutput (the
block that calls engine.run()), when you yield a chunk of type 'RUN_ERROR' set a
local flag (e.g., sawRunError) or reuse earlyTermination set by
handleRunErrorEvent, then after the loop but before calling finalMessages =
engine.getMessages() and adapter.structuredOutputStream(...), check that flag
and return early (or skip the structured-output call) so no structured-output
flow is started after an error. Ensure the check references the same symbols:
engine.run(), handleRunErrorEvent, finalMessages, and
adapter.structuredOutputStream.
In `@testing/e2e/README.md`:
- Line 126: Update the documented prefix list to include the undocumented prefix
used by the abort fixture: add "[structured-stream-abort]" to the "Existing
prefixes" line so it reflects the actual usage in
fixtures/structured-output-stream/abort.json; ensure the exact token
"[structured-stream-abort]" is added alongside "[structured-stream]" to prevent
future collisions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a66ef4a2-3824-4fbc-ad8f-03cfd2e1f307
📒 Files selected for processing (17)
- .changeset/streaming-structured-output-chat.md
- .changeset/streaming-structured-output-openrouter.md
- packages/typescript/ai-openrouter/src/adapters/text.ts
- packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts
- packages/typescript/ai/src/activities/chat/adapter.ts
- packages/typescript/ai/src/activities/chat/index.ts
- packages/typescript/ai/src/types.ts
- testing/e2e/README.md
- testing/e2e/fixtures/structured-output-stream/abort.json
- testing/e2e/fixtures/structured-output-stream/basic.json
- testing/e2e/src/components/ChatUI.tsx
- testing/e2e/src/lib/feature-support.ts
- testing/e2e/src/lib/features.ts
- testing/e2e/src/lib/types.ts
- testing/e2e/src/routes/$provider/$feature.tsx
- testing/e2e/src/routes/api.chat.ts
- testing/e2e/tests/structured-output-stream.spec.ts
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@examples/ts-react-chat/src/routes/api.structured-output.ts`:
- Around line 76-80: The non-streaming branch calls chat(...) but doesn't pass
the HTTP request abort signal, so provider work continues after client
disconnects; modify the call to propagate the request's abort signal (e.g., pass
req.signal or a derived AbortSignal) into chat and/or the adapter: update the
chat invocation in the non-streaming path to include signal: req.signal (or
create an AbortController that ties to req.signal and pass that) and ensure
adapterFor(...) or the adapter returned accepts/forwards that signal so provider
requests are cancelled when the client disconnects.
- Around line 30-43: The POST body should be parsed and validated inside a
try/catch and you must validate the provider before calling adapterFor to return
a 400 for bad input; move the request.json() call into the try block, validate
required fields (e.g., provider and optional model types), and if provider is
missing or not one of the expected values return a 400 JSON error instead of
proceeding. Also harden adapterFor by adding a default/else branch (or throw)
when provider is unknown so it cannot return undefined (refer to adapterFor and
its switch cases like 'openai'/'grok'/'groq'/'openrouter'); ensure any casted
model strings are validated or fall back to safe defaults only after input
validation.
In `@packages/typescript/ai-grok/src/adapters/text.ts`:
- Around line 447-450: The logger currently sends the raw SDK error
(logger.errors('grok.structuredOutputStream fatal', { error, ... })) which may
contain sensitive request/auth metadata; replace the raw error with a
sanitized/normalized error object like the one used in
packages/typescript/ai-openai/src/adapters/text.ts (e.g., call the same
sanitize/normalizeOpenAIError helper from that module or replicate its behavior)
and log only the sanitizedError (and minimal context fields) instead of the raw
SDK error so request-level details are not leaked.
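A sketch of the sanitization the prompt describes, assuming the SDK error is a plain `Error` that may carry a numeric `status`; anything beyond `name`/`message`/`status` (headers, request config, auth) is deliberately dropped. The helper name is illustrative, not the actual `normalizeOpenAIError` implementation.

```typescript
// Keep only safe, low-cardinality fields; drop request/auth metadata the
// raw SDK error may carry (headers, config, API keys).
function sanitizeProviderError(error: unknown): {
  name: string
  message: string
  status?: number
} {
  if (error instanceof Error) {
    const status = (error as { status?: unknown }).status
    return {
      name: error.name,
      message: error.message,
      ...(typeof status === 'number' ? { status } : {}),
    }
  }
  return { name: 'UnknownError', message: String(error) }
}
```

The logger call then receives `sanitizeProviderError(error)` plus minimal context fields instead of the raw SDK error object.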
In `@packages/typescript/ai-openai/src/adapters/text.ts`:
- Around line 312-424: The loop currently only handles
'response.output_text.delta' and drops 'response.reasoning_text.delta' and
'response.reasoning_summary_text.delta', so add handling for those chunk.type
values: create a separate accumulator (e.g., reasoningAccumulatedContent) and
flags (similar to hasEmittedTextMessageStart) to collect and emit reasoning
deltas via yield asChunk (use types analogous to TEXT_MESSAGE_START/CONTENT or a
clear reasoning event), append deltas when chunk.delta is string|array, and
ensure when the run completes ('response.completed' / final structured-output
emission) you attach the accumulated reasoning to the structured-output
completion payload (structured-output.complete.value.reasoning) so
schema-failure post-mortems receive the model's reasoning; use existing symbols
runId, messageId, model, timestamp, asChunk, accumulatedContent for locating
where to add this logic.
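The routing described above can be sketched as a pure accumulator: text deltas and reasoning deltas land in separate buffers so the final structured-output payload can carry the model's reasoning. The event type strings come from the comment; the `delta` payload shape is an assumption.

```typescript
// Event names from the review comment; payload shape is an assumption.
type ResponsesEvent = { type: string; delta?: string }

// Accumulate text and reasoning into separate buffers so the final
// structured-output completion can attach the reasoning for post-mortems.
function accumulate(events: ResponsesEvent[]): { content: string; reasoning: string } {
  let content = ''
  let reasoning = ''
  for (const event of events) {
    if (event.type === 'response.output_text.delta') {
      content += event.delta ?? ''
    } else if (
      event.type === 'response.reasoning_text.delta' ||
      event.type === 'response.reasoning_summary_text.delta'
    ) {
      reasoning += event.delta ?? ''
    }
  }
  return { content, reasoning }
}
```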
In `@testing/e2e/src/lib/feature-support.ts`:
- Around line 83-85: Update the comment to remove the stale reference to
BaseTextAdapter and instead mention the activity-layer
fallbackStructuredOutputStream as the fallback handler; edit the block that
currently references "BaseTextAdapter implementation" so it reads that other
providers fall back to the activity-layer fallbackStructuredOutputStream (or
similar phrasing) and keep the rest of the comment about providers with native
streaming JSON schema support intact to avoid confusion during matrix
maintenance.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d0a46433-8d08-486d-aaac-9c40b3c58f5d
📒 Files selected for processing (12)
- .changeset/streaming-structured-output-grok.md
- .changeset/streaming-structured-output-groq.md
- .changeset/streaming-structured-output-openai.md
- examples/ts-react-chat/src/routes/api.structured-output.ts
- examples/ts-react-chat/src/routes/generations.structured-output.tsx
- packages/typescript/ai-grok/src/adapters/text.ts
- packages/typescript/ai-grok/tests/grok-adapter.test.ts
- packages/typescript/ai-groq/src/adapters/text.ts
- packages/typescript/ai-groq/tests/groq-adapter.test.ts
- packages/typescript/ai-openai/src/adapters/text.ts
- packages/typescript/ai-openai/tests/openai-adapter.test.ts
- testing/e2e/src/lib/feature-support.ts
✅ Files skipped from review due to trivial changes (3)
- .changeset/streaming-structured-output-openai.md
- .changeset/streaming-structured-output-grok.md
- .changeset/streaming-structured-output-groq.md
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx`:
- Around line 281-300: The labels "Provider", "Model" and "Prompt" are not
linked to their form controls; add id attributes to the corresponding <select>
and <textarea> elements (e.g., "provider-select", "model-select",
"prompt-textarea") and set the matching htmlFor on each <label> so the label
text is associated with the control (affects the <select value={provider}
onChange={onProviderChange} disabled={isLoading}>, the <select value={model}
onChange={e => setModel(e.target.value} disabled={isLoading}>, and the prompt
<textarea>), ensuring unique ids and keeping existing props like isLoading
intact.
- Around line 222-229: The handler updates reasoningFull when a streamed chunk
contains a reasoning string (setReasoningFull on chunk.value) but never
recomputes or updates the one-line summary reasoningLine, so the “Thinking”
strip can remain blank or stale; after calling setReasoningFull((chunk.value as
{ reasoning: string }).reasoning) also derive and call setReasoningLine with a
compact one-line version (e.g., trim and collapse whitespace or take the first
sentence) so that both reasoningFull and reasoningLine stay consistent when
reasoning is sent only in the final structured-output.complete or when the final
chunk extends earlier reasoning.
- Around line 170-235: The reader loop currently exits on EOF without failing if
the canonical completion signal ("structured-output.complete") never arrived;
introduce a local boolean flag (e.g. sawFinalResult = false) and set it to true
inside the branch that handles chunk.type === 'CUSTOM' && chunk.name ===
'structured-output.complete' (also still call setHasFinalResult(true)). After
the outer while (after you break on done from reader.read()), check if
(!sawFinalResult) and throw a descriptive Error (e.g. 'Stream ended before
structured-output.complete arrived') so the run fails on EOF when the final
structured-output payload was not observed.
- Around line 163-174: The stream decoding loop uses a TextDecoder with {stream:
true} but never flushes remaining buffered bytes when the reader loop exits;
after the while(true) loop that reads from response.body!.getReader() and
appends decoder.decode(value, {stream: true}) into buffer, call decoder.decode()
once more (without the stream flag) and append its result to buffer (or
otherwise process it) before continuing with accumulated/reasoning/deltas
handling so any partial UTF-8 sequences are correctly finalized.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 49bf45f2-6618-4113-8aea-d644b0809d7c
📒 Files selected for processing (7)
- examples/ts-react-chat/src/routes/api.structured-output.ts
- examples/ts-react-chat/src/routes/generations.structured-output.tsx
- packages/typescript/ai-grok/src/adapters/text.ts
- packages/typescript/ai-grok/tests/grok-adapter.test.ts
- packages/typescript/ai-groq/src/adapters/text.ts
- packages/typescript/ai-groq/tests/groq-adapter.test.ts
- packages/typescript/ai-openai/src/adapters/text.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- examples/ts-react-chat/src/routes/api.structured-output.ts
- packages/typescript/ai-groq/src/adapters/text.ts
- packages/typescript/ai-groq/tests/groq-adapter.test.ts
♻️ Duplicate comments (6)
examples/ts-react-chat/src/routes/api.structured-output.ts (2)
97-105: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Validate the POST body before casting it.

`request.json()` runs outside the `try`, and the unchecked cast lets malformed JSON or an unsupported `provider` fall through as a 500. Parse the body inside the `try` with a Zod schema and return 400 on invalid input instead of letting `adapterFor()` receive unchecked data.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/ts-react-chat/src/routes/api.structured-output.ts` around lines 97 - 105, Move parsing of the POST body into the try block and validate it with a Zod schema before casting so malformed JSON or invalid provider values return a 400 instead of reaching adapterFor(); specifically, define a Zod schema for { prompt: string, provider?: Provider, model?: string, stream?: boolean }, call await request.json() inside the try, parse/validate with schema.parse or safeParse, and if validation fails return a 400 response; then compute resolvedProvider (fallback to 'openrouter') only from validated data and pass that safe value to adapterFor().
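The comment recommends a Zod schema; this dependency-free sketch shows the same contract. The provider names mirror the `adapterFor` switch cases cited above, and the `'openrouter'` default mirrors the route's fallback — both are taken from the review text, not verified against the route source.

```typescript
// Providers mirror the adapterFor switch cases named in the review.
const PROVIDERS = ['openai', 'grok', 'groq', 'openrouter'] as const
type Provider = (typeof PROVIDERS)[number]

type Body = { prompt: string; provider: Provider; model?: string; stream?: boolean }

// Returns a validated body, or a reason string the route can surface as a 400.
function parseBody(raw: unknown): { ok: true; body: Body } | { ok: false; error: string } {
  if (typeof raw !== 'object' || raw === null) return { ok: false, error: 'body must be an object' }
  const r = raw as Record<string, unknown>
  if (typeof r.prompt !== 'string') return { ok: false, error: 'prompt must be a string' }
  const provider = r.provider ?? 'openrouter' // default mirrors the route's fallback
  if (!PROVIDERS.includes(provider as Provider)) {
    return { ok: false, error: `unknown provider: ${String(provider)}` }
  }
  if (r.model !== undefined && typeof r.model !== 'string') {
    return { ok: false, error: 'model must be a string' }
  }
  return {
    ok: true,
    body: {
      prompt: r.prompt,
      provider: provider as Provider,
      model: r.model as string | undefined,
      stream: Boolean(r.stream),
    },
  }
}
```

In the route, `await request.json()` moves inside the `try`, and an `ok: false` result short-circuits into a 400 JSON response before `adapterFor()` is ever called.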
127-132: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Propagate disconnect aborts through the non-streaming `chat()` call too.

Only the streaming branch cancels provider work when `request.signal` aborts. The non-streaming call will keep running after the client is gone, burning tokens and tying up server capacity.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/ts-react-chat/src/routes/api.structured-output.ts` around lines 127 - 132, The non-streaming chat call doesn't receive the client's abort signal, so work continues after the client disconnects; update the chat invocation to pass through the abort signal (e.g., include signal: request.signal) so that chat(...) (and downstream adapterFor/resolvedProvider work) can cancel when the request is aborted; ensure the chat call's options include the signal property alongside adapter, modelOptions, messages, and outputSchema.

examples/ts-react-chat/src/routes/generations.structured-output.tsx (4)
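A minimal sketch of the propagation, using a stand-in for the non-streaming `chat()` call; in the real route this reduces to adding `signal: request.signal` to the options object.

```typescript
// A stand-in for the non-streaming chat() call: long-running provider work
// should observe the signal and reject as soon as the client disconnects.
function chatLike(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    if (signal.aborted) return reject(new Error('aborted'))
    const timer = setTimeout(() => resolve('done'), 1000)
    signal.addEventListener(
      'abort',
      () => {
        clearTimeout(timer) // stop the pending work immediately
        reject(new Error('aborted'))
      },
      { once: true },
    )
  })
}

// In the route handler this becomes: chat({ ..., signal: request.signal })
```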
279-299: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Associate the visible labels with their controls.

`Provider`, `Model`, and `Prompt` are rendered as standalone labels, so the selects and textarea lose explicit accessible names and label-click focus.

Also applies to: 326-334
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx` around lines 279 - 299, The labels "Provider", "Model" and "Prompt" are not associated with their form controls; add explicit associations by giving each select/textarea a unique id (e.g., providerSelect, modelSelect, promptTextarea) and set the corresponding label's htmlFor to that id (or wrap the control inside the label). Update the JSX around the provider select (value={provider}, onChange={onProviderChange}), the model select (value={model}, onChange={setModel}), and the prompt textarea to use those ids so clicking a label focuses the correct control and improves accessibility.
220-227: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Keep `reasoningLine` in sync with terminal reasoning.

This branch overwrites `reasoningFull` but leaves the one-line strip stale. If reasoning only arrives on `structured-output.complete`, the “Thinking” summary stays blank or outdated.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx` around lines 220 - 227, When you set reasoningFull from the chunk (inside the branch that checks (chunk.value as { reasoning?: string }).reasoning), also update reasoningLine so it stays in sync: assign reasoningFull via setReasoningFull((chunk.value as { reasoning: string }).reasoning) and immediately call setReasoningLine with a one-line/trimmed version (e.g., first line or trimmed slice) of the same value; update the same logic path that handles structured-output.complete so both state variables (reasoningFull and reasoningLine) are always updated from the same source.
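One way to derive the one-line strip, assuming `setReasoningFull`/`setReasoningLine` are the state setters named above; the first-sentence heuristic is illustrative, not the page's actual summarization logic.

```typescript
// Derive the compact "Thinking" strip text from full reasoning: collapse
// whitespace, then keep the first sentence (or the whole string if there
// is no sentence boundary).
function toReasoningLine(full: string): string {
  const collapsed = full.replace(/\s+/g, ' ').trim()
  const firstSentence = collapsed.split(/(?<=[.!?])\s/)[0] ?? collapsed
  return firstSentence
}

// In the chunk handler, update both pieces of state from the same source:
//   setReasoningFull(reasoning)
//   setReasoningLine(toReasoningLine(reasoning))
```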
168-233: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fail the run if EOF arrives before `structured-output.complete`.

The loop currently exits successfully on EOF even when only partial deltas were received. If the connection drops before the terminal event, this page leaves a partial object rendered with no error.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx` around lines 168 - 233, The reader loop currently breaks on EOF even if the terminal structured-output.complete was never received; add a local boolean (e.g., receivedFinalResult) initialized false, set it true in the branch where you call setResult(...) and setHasFinalResult(true) for the CUSTOM/structured-output.complete chunk (the same place that currently setsHasFinalResult), and change the EOF handling (when done is true) to throw an Error (or reject) if receivedFinalResult is still false (e.g., throw new Error('Stream ended before structured-output.complete')). This ensures the run fails on premature EOF while preserving existing state updates.
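The flag-based check can be sketched independently of React state; `UIChunk` here is a simplified assumption of the parsed SSE chunk shape.

```typescript
// Hypothetical chunk shape matching the page's SSE parsing.
type UIChunk = { type: string; name?: string; value?: unknown }

// Drain the stream; throw if EOF arrives before the terminal event so a
// dropped connection surfaces as an error instead of a silent partial object.
async function requireCompletion(chunks: AsyncIterable<UIChunk>): Promise<unknown> {
  let sawFinalResult = false
  let result: unknown
  for await (const chunk of chunks) {
    if (chunk.type === 'CUSTOM' && chunk.name === 'structured-output.complete') {
      sawFinalResult = true
      result = chunk.value
    }
  }
  if (!sawFinalResult) {
    throw new Error('Stream ended before structured-output.complete arrived')
  }
  return result
}
```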
168-233: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Flush the `TextDecoder` after the read loop.

`decoder.decode(value, { stream: true })` can buffer trailing UTF-8 bytes. Without a final `decoder.decode()`, the last SSE frame can be truncated or dropped when a multibyte character spans reads.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx` around lines 168 - 233, The TextDecoder may have buffered trailing UTF-8 bytes; after the read loop that uses decoder.decode(value, { stream: true }) (the while (true) { const { done, value } = await reader.read() ... } block) call decoder.decode() with no arguments to flush remaining bytes and append the result to buffer before continuing to parse frames, then run the same frame-parsing logic on the updated buffer; update references to buffer, decoder, and the reader loop/stream parsing code so the final partial multibyte characters are not dropped.
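A self-contained demonstration of why the final flush matters when a multibyte character spans two reads ('€' is three UTF-8 bytes, split here across reads):

```typescript
// Decode a sequence of reads the way the page's reader loop does, then
// flush: the final decode() with no arguments emits any buffered partial
// code point that {stream: true} was holding back.
function decodeAll(reads: Uint8Array[]): string {
  const decoder = new TextDecoder()
  let buffer = ''
  for (const value of reads) {
    buffer += decoder.decode(value, { stream: true })
  }
  buffer += decoder.decode() // flush any buffered partial code point
  return buffer
}
```

In the page, this means calling `decoder.decode()` once after `reader.read()` reports `done` and appending the result to `buffer` before the final frame-parsing pass.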
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Duplicate comments:
In `@examples/ts-react-chat/src/routes/api.structured-output.ts`:
- Around line 97-105: Move parsing of the POST body into the try block and
validate it with a Zod schema before casting so malformed JSON or invalid
provider values return a 400 instead of reaching adapterFor(); specifically,
define a Zod schema for { prompt: string, provider?: Provider, model?: string,
stream?: boolean }, call await request.json() inside the try, parse/validate
with schema.parse or safeParse, and if validation fails return a 400 response;
then compute resolvedProvider (fallback to 'openrouter') only from validated
data and pass that safe value to adapterFor().
- Around line 127-132: The non-streaming chat call doesn't receive the client's
abort signal, so work continues after the client disconnects; update the chat
invocation to pass through the abort signal (e.g., include signal:
request.signal) so that chat(...) (and downstream adapterFor/resolvedProvider
work) can cancel when the request is aborted; ensure the chat call's options
include the signal property alongside adapter, modelOptions, messages, and
outputSchema.
In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx`:
- Around line 279-299: The labels "Provider", "Model" and "Prompt" are not
associated with their form controls; add explicit associations by giving each
select/textarea a unique id (e.g., providerSelect, modelSelect, promptTextarea)
and set the corresponding label's htmlFor to that id (or wrap the control inside
the label). Update the JSX around the provider select (value={provider},
onChange={onProviderChange}), the model select (value={model},
onChange={setModel}), and the prompt textarea to use those ids so clicking a
label focuses the correct control and improves accessibility.
- Around line 220-227: When you set reasoningFull from the chunk (inside the
branch that checks (chunk.value as { reasoning?: string }).reasoning), also
update reasoningLine so it stays in sync: assign reasoningFull via
setReasoningFull((chunk.value as { reasoning: string }).reasoning) and
immediately call setReasoningLine with a one-line/trimmed version (e.g., first
line or trimmed slice) of the same value; update the same logic path that
handles structured-output.complete so both state variables (reasoningFull and
reasoningLine) are always updated from the same source.
- Around line 168-233: The reader loop currently breaks on EOF even if the
terminal structured-output.complete was never received; add a local boolean
(e.g., receivedFinalResult) initialized false, set it true in the branch where
you call setResult(...) and setHasFinalResult(true) for the
CUSTOM/structured-output.complete chunk (the same place that currently
sets hasFinalResult), and change the EOF handling (when done is true) to throw an
Error (or reject) if receivedFinalResult is still false (e.g., throw new
Error('Stream ended before structured-output.complete')). This ensures the run
fails on premature EOF while preserving existing state updates.
- Around line 168-233: The TextDecoder may have buffered trailing UTF-8 bytes;
after the read loop that uses decoder.decode(value, { stream: true }) (the while
(true) { const { done, value } = await reader.read() ... } block) call
decoder.decode() with no arguments to flush remaining bytes and append the
result to buffer before continuing to parse frames, then run the same
frame-parsing logic on the updated buffer; update references to buffer, decoder,
and the reader loop/stream parsing code so the final partial multibyte
characters are not dropped.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5087f1ff-61cd-4360-9d0e-51d3117b3ba0
📒 Files selected for processing (2)
examples/ts-react-chat/src/routes/api.structured-output.ts
examples/ts-react-chat/src/routes/generations.structured-output.tsx
Structured streaming in action: Screen.Recording.2026-05-05.at.5.26.00.pm.mov
c3b9f03 to 25bb94e
Hi! I'm the autofix.ci bot. It looks like you correctly set up a CI job that uses the autofix.ci GitHub Action, but the autofix.ci GitHub App has not been installed for this repository. This means that autofix.ci unfortunately does not have the permissions to fix this pull request. If you are the repository owner, please install the app and then restart the CI workflow! 😃
84a07ab to 2dc520b
aeb4d6e to 7aff8b1
bb52f59 to 557b860
…mple wiring
Packages
- ai-openai: add openaiChatCompletions / OpenAIChatCompletionsTextAdapter
sibling to the existing Responses adapter. Thin subclass of
OpenAIBaseChatCompletionsTextAdapter so callers can pick the older
/v1/chat/completions wire format against the OpenAI SDK.
- ai: ChatStreamSummarizeAdapter.summarizeStream now accumulates summary
text and emits a terminal CUSTOM { name: 'generation:result' } event
before passing RUN_FINISHED through. Fixes useSummarize never populating
result in connection/server-fn streaming modes — GenerationClient only
sets result on that specific CUSTOM event.
ts-react-chat example
- Structured Output menu: drop the misleading '(OpenRouter)' suffix from
the sidebar entry; relabel the OpenAI option as 'OpenAI (Responses)';
add 'OpenAI (Chat Completions)' and 'OpenRouter (Responses beta)' so
the page exposes all four wire-format combinations end-to-end.
- Summarize page: add a model picker (gpt-4o-mini through gpt-5.2) wired
through to the API route and both server-fns. Drop the hard-coded
maxLength: 200 which on Responses-API reasoning models gets the whole
max_output_tokens budget consumed by hidden reasoning; the style
instruction in the prompt already drives length. Live-render
TEXT_MESSAGE_CONTENT deltas via onChunk so streaming mode is visibly
streaming rather than appearing identical to direct.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
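The summarize fix described above reduces to a small stream transform. A hedged sketch under an assumed chunk shape — the event names (`TEXT_MESSAGE_CONTENT`, `CUSTOM { name: 'generation:result' }`, `RUN_FINISHED`) come from the commit text; everything else is illustrative:

```typescript
type Chunk = { type: string; name?: string; delta?: string; value?: unknown }

// Accumulate summary text from content deltas and emit a terminal CUSTOM
// { name: 'generation:result' } event before RUN_FINISHED passes through,
// so clients that only set `result` on that specific event get a value.
export async function* summarizeStreamSketch(
  source: AsyncIterable<Chunk>,
): AsyncIterable<Chunk> {
  let summary = ''
  for await (const chunk of source) {
    if (chunk.type === 'TEXT_MESSAGE_CONTENT' && chunk.delta) {
      summary += chunk.delta
    }
    if (chunk.type === 'RUN_FINISHED') {
      yield { type: 'CUSTOM', name: 'generation:result', value: { result: summary } }
    }
    yield chunk
  }
}
```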
…structured-output streams

Address PR #527 review feedback from @AlemTuzlak (3 comments on responses-text.ts, 1 on text.ts, 2 on activities/chat/index.ts):

- ai-openrouter/responses-text.ts (`structuredOutputStream`):
  - Drop `(await this.orClient.beta.responses.send(...)) as AsyncIterable<StreamEvents>` — `EventStream<T>` already extends `AsyncIterable<T>`.
  - Drop `as ResponsesRequest['text']` on the inner `text` object — the SDK's request type accepts the literal shape directly.
  - Drop inline `(chunk as { ... }).delta` / `(chunk.response ?? {}) as {...}` casts. `NormalizedStreamEvent` already types `delta` and `response`; the existing `processStreamChunks` reads the same fields without casts.
  - Drop redundant `satisfies StreamChunk` (20×). The `AsyncIterable<StreamChunk>` / `Generator<StreamChunk>` return types already validate every yield site via contextual typing.
- ai-openrouter/text.ts (`structuredOutputStream`):
  - Drop `(await this.orClient.chat.send(...)) as AsyncIterable<ChatStreamChunk>`.
  - Drop redundant `satisfies StreamChunk` (17×).
- ai/activities/chat/index.ts:
  - Replace the `{ chatOptions: TextOptions<any, any>; outputSchema: any }` parameter on `fallbackStructuredOutputStream` with `StructuredOutputOptions<Record<string, unknown>>` — the adapter-side type already exists.
  - Drop `(adapter as { provider?: string }).provider ?? adapter.name` in the structured-stream logger. `provider` is not a `TextAdapter` field; `adapter.name` is the canonical provider identifier.
  - Drop redundant `satisfies StreamChunk` / `satisfies StructuredOutputCompleteEvent` (8×) in `fallbackStructuredOutputStream` and `runStreamingStructuredOutputImpl`.
- ai/tests/chat-result-types.test.ts (new):
  - Add a type-only regression test for `TextActivityResult`. Pins each `(outputSchema?, stream?)` combination so #526's streaming-structured-output branch can't silently regress to a Promise (or vice versa).
Use narrowed locals (`outputSchema`, `stream`) and explicit `outputSchema: undefined` overrides instead of double-casting `options` through `unknown`. The trailing `as TextActivityResult<TSchema, TStream>` stays — TS narrows value types from runtime guards but not generic type parameters, so the conditional return type can't be reduced from inside a branch. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
6988031 to d5daa73
Cover chat({ outputSchema, stream: true }) in docs/chat/structured-outputs.md
and the ai-core/structured-outputs skill: StructuredOutputStream<T> return
type, isStructuredOutputCompleteEvent example, structured-output.complete
event shape, per-adapter coverage (native vs. fallback), and a HIGH common
mistake against parsing partial JSON deltas. Adds a cross-ref from the skill
to ai-core/chat-experience.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…nks in structuredOutputStream

Public `StructuredOutputStream<T>` is now a discriminated union over three tagged CUSTOM variants: `structured-output.complete<T>`, `approval-requested`, and `tool-input-available`. Each has a literal `name` and typed `value`, so `chunk.type === 'CUSTOM' && chunk.name === '<literal>'` narrows directly to the exact shape — no `isStructuredOutputCompleteEvent` helper or cast needed. The bare CustomEvent is excluded from the union (its `value: any` would collapse the narrow to `any`); user-emitted events via the `emitCustomEvent` API still flow at runtime as a documented residual gap.

New exports from @tanstack/ai: `ApprovalRequestedEvent`, `ToolInputAvailableEvent`. The `isStructuredOutputCompleteEvent` helper is removed (this overload is new in this PR — no shipped consumers).

Per-chunk `logger.provider(...)` debug logging added inside `structuredOutputStream` for the four affected adapters (openai-base chat-completions + responses, ai-openrouter text + responses-text), matching the existing pattern in `chatStream` for end-to-end introspection in debug mode. ai-openrouter uses `finishReason` (camelCase) consistent with the SDK and the sibling chatStream logger; openai-base uses `finish_reason` per the openai SDK shape.

Docs (`docs/chat/structured-outputs.md`) and the AI-core `structured-outputs` SKILL.md updated to use the direct discriminated narrow.
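A type-level sketch of the tagged union this commit describes — the interfaces below are reconstructions from the commit text, not the package's actual declarations, and the `value` fields on the approval/tool events are assumptions:

```typescript
// Tagged CUSTOM variants with literal `name`s; a plain property check
// narrows chunk.value to the exact typed shape, no helper function needed.
interface StructuredOutputCompleteEvent<T> {
  type: 'CUSTOM'
  name: 'structured-output.complete'
  value: { object: T; raw: string; reasoning?: string }
}
interface ApprovalRequestedEvent {
  type: 'CUSTOM'
  name: 'approval-requested'
  value: { toolCallId: string } // field illustrative
}
interface ToolInputAvailableEvent {
  type: 'CUSTOM'
  name: 'tool-input-available'
  value: { toolCallId: string; input: unknown } // fields illustrative
}
interface TextDeltaEvent {
  type: 'TEXT_MESSAGE_CONTENT'
  delta: string
}
type StructuredStreamChunk<T> =
  | StructuredOutputCompleteEvent<T>
  | ApprovalRequestedEvent
  | ToolInputAvailableEvent
  | TextDeltaEvent

export function describeChunk<T>(chunk: StructuredStreamChunk<T>): string {
  if (chunk.type === 'CUSTOM' && chunk.name === 'structured-output.complete') {
    // chunk.value is { object: T; raw: string; reasoning?: string } here
    return `complete: ${chunk.value.raw}`
  }
  if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
    return `delta: ${chunk.delta}`
  }
  return chunk.name // 'approval-requested' | 'tool-input-available'
}
```

Excluding the bare `CustomEvent` from the union is what keeps the narrow precise: one `value: any` member would widen `chunk.value` to `any` on every branch.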
Merge the 10 changesets covering this PR (streaming structured output across chat/grok/groq/openai/openai-base/openrouter, the openrouter decoupling + narrowing, and the summarize subsystem unification) into a single `.changeset/streaming-structured-output.md` with the union of version bumps. The body retains every meaningful section from the originals (core, openai-base, provider adapters, openrouter decoupling, summarize) and adds the tagged-CustomEvent type design from the previous commit.
…lesson

Initial scaffold of `.agent/self-learning/` for the self-improve plugin (INDEX.md, config.yml, curation-state.yml, coupling.json, .gitignore, `lessons/promoted/`). Captures the first repo-scoped lesson: `2026-05-14-build-before-running-examples.md` — run `pnpm -w run build:all` before starting any example dev server so the workspace packages have `dist/` outputs vite can resolve.
…s adapter

docs/chat/structured-outputs.md
Add "Streaming with tools that may pause" subsection covering the approval-requested / tool-input-available tagged variants the agent loop can emit before structured-output.complete. Code example shows the narrowing pattern for all three CUSTOM variants. Cross-links the Tool Approval Flow and Client Tools pages.

docs/adapters/openai.md
Add "Chat Completions API" section after Basic Usage covering the new openaiChatCompletions / createOpenaiChatCompletions factories — when to pick Chat Completions vs. Responses (reasoning-summary streaming, wire-format compatibility), code example, and a link to the Structured Outputs page for the streaming case. API Reference at the bottom now includes both factories.
…anual iteration to advanced
The previous streaming section opened with `for await (const chunk of stream)`
— that's the advanced/server-side-only path. The typical use case is a UI
streaming JSON deltas through SSE from a server endpoint, and the docs should
lead with it.
- New "Server endpoint" subsection: `chat({outputSchema, stream: true})` +
  `toServerSentEventsResponse(stream)`. One short example, no ceremony.
- New "Client with useChat" subsection: `useChat` + `fetchServerSentEvents`
  + `onChunk`, with `parsePartialJSON` driving progressive UI. Shows where
  the validated object lives (the terminal `structured-output.complete` event,
  typed as `T` via the schema). Notes Vue/Solid/Svelte share the shape.
- "What the stream contains" + "Adapter coverage" tables retained verbatim.
- Old standalone `for await` example moved to a new "Advanced: iterating the
  stream directly" subsection at the end, framed as the path for Node scripts,
  CLIs, server-only flows, and tests.
- "Streaming with tools that may pause" reframed to use the `onChunk` signature
  (matching the new primary path); a note points back to the advanced section
  for callers iterating the stream directly.
…treaming
Pass the same schema you give chat() on the server to useChat() on the
client, and the hook tracks the progressive object and the validated
terminal payload for you — no external useState, no onChunk ceremony, no
parsePartialJSON calls in user code.
API:
const { sendMessage, isLoading, partial, final } = useChat({
connection: fetchServerSentEvents("/api/extract"),
outputSchema: PersonSchema,
})
// partial: DeepPartial<Person> — updates per TEXT_MESSAGE_CONTENT delta
// final: Person | null — snaps on structured-output.complete
Implementation:
- New generic param TSchema extends SchemaInput | undefined = undefined on
UseChatOptions / UseChatReturn / useChat.
- UseChatReturn is conditional on TSchema: when supplied, adds typed
partial/final; when undefined (default), return is unchanged. Inferred
automatically from outputSchema option.
- Internal onChunk handler tracks raw JSON buffer via ref, runs
parsePartialJSON on each TEXT_MESSAGE_CONTENT delta, snaps final on the
terminal CUSTOM structured-output.complete event, resets all three on
RUN_STARTED. User's own onChunk callback still fires after internal
processing — both compose.
- DeepPartial<T> exported for handlers that need to annotate.
The schema is used purely for client-side type inference; server-side
validation still runs against the schema passed to chat({ outputSchema })
on the server route. Works identically for non-streaming endpoints — for
those, partial stays {} and final populates when the single terminal
event arrives.
Type-level tests (tests/use-chat-types.test.ts) pin both branches of the
discriminated return type — useChat() without outputSchema rejects access
to partial/final via @ts-expect-error, useChat() with outputSchema asserts
typed DeepPartial<Person> / Person | null.
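The internal handler described above can be sketched framework-free. `parsePartialJSON` below is a naive stand-in (close any open string and brackets, then retry `JSON.parse`), not the library's implementation, and the chunk shape is illustrative:

```typescript
type Chunk = { type: string; name?: string; delta?: string; value?: { object?: unknown } }

// Naive partial-JSON repair: close an open string, drop a dangling
// separator, append missing closers, then try JSON.parse. Returns null
// when the prefix still isn't parseable; the tracker keeps its last value.
function parsePartialJSON(text: string): unknown {
  const closers: string[] = []
  let inString = false
  let escaped = false
  for (const ch of text) {
    if (escaped) { escaped = false; continue }
    if (inString) {
      if (ch === '\\') escaped = true
      else if (ch === '"') inString = false
      continue
    }
    if (ch === '"') inString = true
    else if (ch === '{') closers.push('}')
    else if (ch === '[') closers.push(']')
    else if (ch === '}' || ch === ']') closers.pop()
  }
  let repaired = text
  if (inString) repaired += '"'
  repaired = repaired.replace(/[,:]\s*$/, '')
  try {
    return JSON.parse(repaired + closers.reverse().join(''))
  } catch {
    return null
  }
}

// Mirrors the described onChunk flow: RUN_STARTED resets,
// TEXT_MESSAGE_CONTENT accumulates + reparses, and the terminal CUSTOM
// structured-output.complete event snaps `final`.
export function makeStructuredTracker<T>() {
  let raw = ''
  let partial: unknown = {}
  let final: T | null = null
  return {
    onChunk(chunk: Chunk) {
      if (chunk.type === 'RUN_STARTED') {
        raw = ''; partial = {}; final = null
      } else if (chunk.type === 'TEXT_MESSAGE_CONTENT' && chunk.delta) {
        raw += chunk.delta
        partial = parsePartialJSON(raw) ?? partial
      } else if (chunk.type === 'CUSTOM' && chunk.name === 'structured-output.complete') {
        final = (chunk.value?.object as T) ?? null
      }
    },
    get partial() { return partial },
    get final() { return final },
  }
}
```

Each framework hook wraps this same logic in its own reactivity primitive; only how `partial`/`final` are exposed differs.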
…l/final
Apply the same schema-driven structured-output API that landed in
@tanstack/ai-react to the other three framework hooks. Same options shape
(`outputSchema?: TSchema`), same discriminated return type, identical
runtime behavior — only the reactivity primitive differs per framework.
Reactivity primitives:
Vue — `Readonly<ShallowRef<DeepPartial<T>>>` / `Readonly<ShallowRef<T | null>>`
Solid — `Accessor<DeepPartial<T>>` / `Accessor<T | null>`
Svelte — `readonly partial: DeepPartial<T>` / `readonly final: T | null`
(rune-backed getters)
Each hook is now generic on `TSchema extends SchemaInput | undefined`,
inferred from the `outputSchema` option. When omitted (default), the
return type is byte-identical to before; when supplied, `partial`/`final`
are added via a conditional `UseChatReturn<TTools, TSchema>` /
`CreateChatReturn<TTools, TSchema>`. The internal onChunk handler is the
same in all four — RUN_STARTED resets, TEXT_MESSAGE_CONTENT accumulates +
parses, CUSTOM structured-output.complete snaps final. User onChunk is
still invoked after the internal pass.
DeepPartial<T> is exported from each framework package.
Type-level tests in each package pin both branches of the discriminated
return type, mirroring the React variant — pure types, no renderer
required. Existing test suites pass on all three packages:
ai-vue: 93 tests pass
ai-solid: 103 tests pass
ai-svelte: 56 tests pass
…alls

- structured-outputs.md "Client with useChat" section: add a "Rendering reasoning and tool calls" subsection explaining that those land on messages[…].parts (ThinkingPart, ToolCallPart, ToolResultPart) just like normal chat — no separate hook fields. Includes a render snippet showing how to hide the raw-JSON TextPart and let the structured view (partial/final) replace it.
- Note that useChat (React/Vue/Solid) and createChat (Svelte) all accept the same outputSchema option with the same semantics — only the reactivity primitive differs.
- Changeset: bump @tanstack/ai-vue, @tanstack/ai-solid, @tanstack/ai-svelte to minor alongside @tanstack/ai-react. Replaced the "React" section with a unified "Framework hooks" section covering all four packages and documenting the per-framework reactivity types.
… APIs
The previous draft of the streaming-with-tools-that-may-pause subsection
invented showApprovalPrompt / runClientTool / resumeWithToolResult
helpers. The actual flow uses the standard chat APIs, identical to a
non-structured chat:
- Server tools with needsApproval:true land on messages[...].parts as
ToolCallPart with state === 'approval-requested'. Render approval UI
from messages, respond via addToolApprovalResponse({ id, approved })
from the hook return (see docs/tools/tool-approval).
- Client tools with execute() set run automatically via the
ChatClient's onToolCall handler (chat-client.ts:198-233). For manual
handling, use addToolResult({ toolCallId, tool, output, state }) —
see docs/tools/client-tools.
Replaced the made-up code with a real example showing an approval-
gated tool inside a structured-output run, using addToolApprovalResponse
and rendering the prompt from messages.parts. The structured stream
layers on top of standard chat — no special pause-handling logic.
…utput orchestrator
Two runtime test files closing the highest-value gaps in the PR's
test coverage:
packages/typescript/ai-react/tests/use-chat-structured-output.test.ts (4 tests)
- partial updates progressively from TEXT_MESSAGE_CONTENT deltas, final
snaps on the terminal CUSTOM structured-output.complete event
- state resets between runs via the stateful mock adapter (RUN_STARTED
clears partial/final before the second run's deltas land)
- user-supplied onChunk callback fires after internal tracking, with
full visibility of the same chunks
- useChat() without outputSchema doesn't track structured state — the
internal handler's outputSchema-gate is a no-op
packages/typescript/ai/tests/chat-structured-output-stream.test.ts (6 tests)
- native adapter.structuredOutputStream path: validated structured-
output.complete event forwarded with parsed object, schema validation
failure → RUN_ERROR { code: 'schema-validation' } and NO complete
event is emitted, reasoning carries through validation onto the
terminal event, TEXT_MESSAGE_CONTENT deltas pass through
- fallbackStructuredOutputStream path (adapter lacks native streaming):
synthesizes RUN_STARTED → TEXT_MESSAGE_* → structured-output.complete
→ RUN_FINISHED around the non-streaming structuredOutput call;
schema validation failure on the fallback path also emits RUN_ERROR
Together: ai package 769 tests, ai-react 110, ai-vue 93, ai-solid 103,
ai-svelte 56 — all green.
The server-side adapter implementations of structuredOutputStream (shared
by ai-openai, ai-grok, ai-groq via inheritance) had zero unit coverage —
only the e2e suite exercised them. Two new focused test files close that
gap by stubbing the openai SDK client and verifying the AG-UI lifecycle,
request shape, error paths, and per-chunk debug logging.
tests/chat-completions-structured-output-stream.test.ts (6 tests)
- happy path: RUN_STARTED → TEXT_MESSAGE_* → CUSTOM
structured-output.complete (typed object + raw JSON) → RUN_FINISHED
- request shape: stream: true + response_format: { type: 'json_schema',
json_schema: { strict: true } }; tools are stripped
- delta accumulation across multiple chunks produces exactly one
structured-output.complete with the fully-parsed object
- empty content → RUN_ERROR { code: 'empty-response' }, no
structured-output.complete is emitted
- malformed JSON → RUN_ERROR { code: 'parse-error' }
- per-chunk logger.provider is called once per SDK chunk (verified via
a spy logger threaded through resolveDebugOption)
tests/responses-structured-output-stream.test.ts (7 tests)
- same matrix against the Responses API event shape
(response.created / response.output_text.delta / response.completed)
- request shape: stream: true + text.format: { type: 'json_schema',
strict: true }; tools stripped
- usage promoted from response.completed onto RUN_FINISHED
- empty content / parse-error → RUN_ERROR with the correct code
- response.refusal.delta → RUN_ERROR { code: 'refusal' } (Responses-
only failure surface)
- per-chunk logger.provider invocation
Stub adapters extend the base directly and pass a fake OpenAI client
whose chat.completions.create / responses.create routes into a per-test
mock — same pattern as the existing chat-completions-text.test.ts and
responses-text.test.ts suites.
openai-base test count: 70 → 83 (all passing). Types + lint clean.
Two failures from the previous push, both stemming from the type-test files I added: knip flagged @standard-schema/spec as an unlisted import across ai-react, ai-vue, ai-solid, and ai-svelte test files, and @tanstack/ai-vue:test:types failed because — unlike the other three — its tsconfig included tests/, so tsc strictly resolved the import (which isn't a direct dep, only transitively via @tanstack/ai).

Fixes:
- Add `@standard-schema/spec: ^1.1.0` to devDependencies on all four framework packages. The import is purely for type-level construction in the type tests (StandardJSONSchemaV1<Person, Person> — a phantom branded type that simulates what a Zod schema's inferred type would look like). devDep is the right scope.
- Align ai-vue's tsconfig with ai-react/ai-solid/ai-svelte by dropping tests/ from the tsc include block. Tests are still type-checked by vitest at runtime; tsc now only checks src/.

Verified locally: pnpm test:knip, pnpm test:sherif, and test:types on all four framework packages pass.
🎯 Changes
Started as openrouter-only (#526) and grew into a multi-package effort: typed streaming structured output across the four OpenAI-compatible providers, a Chat Completions sibling for OpenAI, a fix for streaming summarize, the decoupling of `@tanstack/ai-openrouter` from the shared OpenAI base, and a hand-testable example to exercise the whole surface.

Core — `@tanstack/ai`
- `chat({ outputSchema, stream: true })` overload returning `StructuredOutputStream<InferSchemaType<TSchema>>`. Yields raw JSON deltas plus a terminal `CUSTOM { name: 'structured-output.complete', value: { object, raw, reasoning? } }` event.
- `StructuredOutputStream<T>` is a discriminated union over three tagged `CUSTOM` variants — `structured-output.complete<T>`, `approval-requested`, and `tool-input-available` (new `ApprovalRequestedEvent` / `ToolInputAvailableEvent` interfaces exported from `@tanstack/ai`). Consumers narrow with a plain `chunk.type === 'CUSTOM' && chunk.name === '<literal>'` and get a fully-typed `chunk.value` — no helper or cast required. The bare `CustomEvent` (`value: any`) is excluded from the union on purpose to keep the narrow from collapsing to `any`; user-emitted events via the `emitCustomEvent` context API still flow at runtime as a documented residual gap.
- (`finishReason`), typed `RUN_ERROR` on empty content, mid-stream provider errors terminate cleanly, schema-validation failures carry `runId` / `model` / `timestamp`.
- `fallbackStructuredOutputStream` in the activity layer is the single source of truth for adapters that don't implement `structuredOutputStream` natively; `BaseTextAdapter` no longer ships a default.
- `ChatStreamSummarizeAdapter.summarizeStream` accumulates summary text and emits a terminal `CUSTOM { name: 'generation:result' }` event before the final `RUN_FINISHED`. Fixes `useSummarize` never populating `result` over streaming connections (the client only sets `result` on that specific CUSTOM event).

Base — `@tanstack/openai-base`
- Renamed to `@tanstack/ai-openai-compatible`; the old name remains published for pinned lockfiles but receives no further updates.
- `structuredOutputStream` on `OpenAIBaseChatCompletionsTextAdapter` (uses `response_format: { type: 'json_schema', strict: true }` + `stream: true`) and `OpenAIBaseResponsesTextAdapter` (uses `text.format: { type: 'json_schema', strict: true }` + `stream: true`). Both call the OpenAI SDK directly post-#543 ("Migrate ai-groq, ai-openrouter, ai-ollama to openai-base + parameterize the base for SDK shape variance") — the "adopt openai SDK" refactor.
- (`isAbortError`) so subclasses can map provider-specific abort errors to `code: 'aborted'` without owning the rest of the finalisation path.
- `logger.provider(...)` debug logging now fires inside `structuredOutputStream` loops (mirroring the existing `chatStream` pattern) so debug mode gives end-to-end chunk introspection for the structured-output path.

Provider adapters
| Package | Adapter | Reasoning delta source |
| --- | --- | --- |
| `@tanstack/ai-openai` | `openaiText` | `response.reasoning_text.delta` + `response.reasoning_summary_text.delta` (requires `reasoning.summary: 'auto'`) |
| `@tanstack/ai-openai` | `openaiChatCompletions` (new) | `reasoning.summary` opt-in |
| `@tanstack/ai-grok` | `grokText` | `delta.reasoning_content` (DeepSeek convention; not typed by OpenAI SDK) |
| `@tanstack/ai-groq` | `groqText` | `delta.reasoning` (requires `reasoning_format: 'parsed'`; not typed by groq-sdk) |
| `@tanstack/ai-openrouter` | `openRouterText` | `delta.reasoningDetails` (camelCase) |
| `@tanstack/ai-openrouter` | `openRouterResponsesText` (beta) | `response.reasoning_text.delta` + `response.reasoning_summary_text.delta` via `normalizeStreamEvent` |
REASONING_*lifecycle (REASONING_START→REASONING_MESSAGE_START→REASONING_MESSAGE_CONTENTdeltas →REASONING_MESSAGE_END→REASONING_END) and close it beforeTEXT_MESSAGE_START. Accumulated reasoning is also surfaced onstructured-output.complete.value.reasoningfor consumers that only subscribe to the terminal event. OpenRouter SDK's proprietaryRequestAbortedErroris mapped (alongside DOMAbortError) tocode: 'aborted'in the two openrouter adapters.@tanstack/ai-openaialso exports a newOpenAIChatCompletionsTextAdapter/openaiChatCompletions/createOpenaiChatCompletionsfactory — a sibling to the existing Responses adapter for callers who want the older/v1/chat/completionswire format against the OpenAI SDK.@tanstack/ai-openrouteris now decoupled from the shared OpenAI base and reads@openrouter/sdk's camelCase types natively (no more snake_case ↔ camelCase round-trips). It extendsBaseTextAdapterdirectly;@tanstack/openai-baseandopenaiare removed from its deps. Public OpenRouter API is unchanged.ts-react-chatexample/generations/structured-outputis now a 6-way demo of the entire feature surface:Streamtoggle: off uses non-streamingchat({ outputSchema }), on usesstructuredOutputStream.parsePartialJSON— title, summary, recommendation cards (brand → name → type → price → reason), and next steps fill in field-by-field as JSON streams in, snapping to the validated payload on the terminal event.REASONING_MESSAGE_CONTENTdeltas.modelOptions(gpt-5.x/o-series →reasoning.summary: 'auto', groq gpt-oss/qwen3/kimi-k2 →reasoning_format: 'parsed', openrouter →reasoning.effort: 'medium')./generations/summarizeoverhauled too:onChunkhandler that accumulatesTEXT_MESSAGE_CONTENTdeltas into local state so the summary renders token-by-token, with a "streaming…" indicator next to the heading.maxLength: 200. 
On Responses-API reasoning modelsmaxLengthmaps tomax_output_tokenswhich covers reasoning + visible output combined; a 200-token cap consumed the whole budget on hidden reasoning, returning truncated responses with no summary. Theconcise/bullet-points/paragraphprompt instruction is enough to drive length.E2E
testing/e2e/src/lib/feature-support.ts — the `structured-output-stream` set expanded from `['openrouter']` to all four providers. The parameterised spec in testing/e2e/tests/structured-output-stream.spec.ts (happy path + abort) now runs across each.

✅ Checklist
- `pnpm run test:pr`.
- Verified during development: `pnpm test:lib`, `pnpm test:types`, `pnpm test:eslint`, `pnpm test:build`, `pnpm test:knip` across affected packages. Hand-tested in examples/ts-react-chat against each provider for both streaming and non-streaming. E2E (`pnpm --filter @tanstack/ai-e2e test:e2e -- --grep structured-output-stream`) needs to run on a host where port 4010 is free.

🚀 Release Impact
One consolidated changeset (`.changeset/streaming-structured-output.md`) covers the union of version bumps: minor on `@tanstack/ai`, `@tanstack/openai-base`, `@tanstack/ai-openai`, `@tanstack/ai-grok`, `@tanstack/ai-groq`, `@tanstack/ai-openrouter`; patch on `@tanstack/ai-anthropic`, `@tanstack/ai-gemini`, `@tanstack/ai-ollama`. The body retains the per-area sections from the originals (core, openai-base, provider adapters, openrouter decoupling, summarize subsystem) and adds the tagged-CustomEvent type design.