Skip to content

feat: streaming structured output across openai/openrouter/grok/groq + summarize fix#527

Merged
AlemTuzlak merged 33 commits into
mainfrom
526-featai-openrouter-support-streaming-structured-output-response_format-json_schema-with-stream-true
May 14, 2026
Merged

feat: streaming structured output across openai/openrouter/grok/groq + summarize fix#527
AlemTuzlak merged 33 commits into
mainfrom
526-featai-openrouter-support-streaming-structured-output-response_format-json_schema-with-stream-true

Conversation

@tombeckenham
Copy link
Copy Markdown
Contributor

@tombeckenham tombeckenham commented May 5, 2026

🎯 Changes

Started as openrouter-only (#526) and grew into a multi-package effort: typed streaming structured output across the four OpenAI-compatible providers, a Chat Completions sibling for OpenAI, a fix for streaming summarize, the decoupling of @tanstack/ai-openrouter from the shared OpenAI base, and a hand-testable example to exercise the whole surface.

Core — @tanstack/ai

  • New chat({ outputSchema, stream: true }) overload returning StructuredOutputStream<InferSchemaType<TSchema>>. Yields raw JSON deltas plus a terminal CUSTOM { name: 'structured-output.complete', value: { object, raw, reasoning? } } event.
  • StructuredOutputStream<T> is a discriminated union over three tagged CUSTOM variants — structured-output.complete<T>, approval-requested, and tool-input-available (new ApprovalRequestedEvent / ToolInputAvailableEvent interfaces exported from @tanstack/ai). Consumers narrow with a plain chunk.type === 'CUSTOM' && chunk.name === '<literal>' and get a fully-typed chunk.value — no helper or cast required. The bare CustomEvent (value: any) is excluded from the union on purpose to keep the narrow from collapsing to any; user-emitted events via the emitCustomEvent context API still flow at runtime as a documented residual gap.
  • Activity-layer hardening: always-finalise after the stream loop (no silent hangs on missing finishReason), typed RUN_ERROR on empty content, mid-stream provider errors terminate cleanly, schema-validation failures carry runId / model / timestamp.
  • fallbackStructuredOutputStream in the activity layer is the single source of truth for adapters that don't implement structuredOutputStream natively; BaseTextAdapter no longer ships a default.
  • ChatStreamSummarizeAdapter.summarizeStream accumulates summary text and emits a terminal CUSTOM { name: 'generation:result' } event before the final RUN_FINISHED. Fixes useSummarize never populating result over streaming connections (the client only sets result on that specific CUSTOM event).

Base — @tanstack/openai-base

  • Renamed from @tanstack/ai-openai-compatible; the old name remains published for pinned lockfiles but receives no further updates.
  • Centralised structuredOutputStream on OpenAIBaseChatCompletionsTextAdapter (uses response_format: { type: 'json_schema', strict: true } + stream: true) and OpenAIBaseResponsesTextAdapter (uses text.format: { type: 'json_schema', strict: true } + stream: true). Both call the OpenAI SDK directly post-Migrate ai-groq, ai-openrouter, ai-ollama to openai-base + parameterize the base for SDK shape variance #543's "adopt openai SDK" refactor.
  • Cross-SDK abort detection hook (isAbortError) so subclasses can map provider-specific abort errors to code: 'aborted' without owning the rest of the finalisation path.
  • Per-chunk logger.provider(...) debug logging now fires inside structuredOutputStream loops (mirroring the existing chatStream pattern) so debug mode gives end-to-end chunk introspection for the structured-output path.

Provider adapters

Adapter API Reasoning surface
@tanstack/ai-openai openaiText Responses response.reasoning_text.delta + response.reasoning_summary_text.delta (requires reasoning.summary: 'auto')
@tanstack/ai-openai openaiChatCompletions (new) Chat Completions reasoning emitted silently — Chat Completions has no reasoning.summary opt-in
@tanstack/ai-grok grokText Chat Completions delta.reasoning_content (DeepSeek convention; not typed by OpenAI SDK)
@tanstack/ai-groq groqText Chat Completions delta.reasoning (requires reasoning_format: 'parsed'; not typed by groq-sdk)
@tanstack/ai-openrouter openRouterText Chat Completions delta.reasoningDetails (camelCase)
@tanstack/ai-openrouter openRouterResponsesText (beta) Responses (beta) response.reasoning_text.delta + response.reasoning_summary_text.delta via normalizeStreamEvent

All six emit the contractual REASONING_* lifecycle (REASONING_STARTREASONING_MESSAGE_STARTREASONING_MESSAGE_CONTENT deltas → REASONING_MESSAGE_ENDREASONING_END) and close it before TEXT_MESSAGE_START. Accumulated reasoning is also surfaced on structured-output.complete.value.reasoning for consumers that only subscribe to the terminal event. OpenRouter SDK's proprietary RequestAbortedError is mapped (alongside DOM AbortError) to code: 'aborted' in the two openrouter adapters.

@tanstack/ai-openai also exports a new OpenAIChatCompletionsTextAdapter / openaiChatCompletions / createOpenaiChatCompletions factory — a sibling to the existing Responses adapter for callers who want the older /v1/chat/completions wire format against the OpenAI SDK.

@tanstack/ai-openrouter is now decoupled from the shared OpenAI base and reads @openrouter/sdk's camelCase types natively (no more snake_case ↔ camelCase round-trips). It extends BaseTextAdapter directly; @tanstack/openai-base and openai are removed from its deps. Public OpenRouter API is unchanged.

ts-react-chat example

/generations/structured-output is now a 6-way demo of the entire feature surface:

  • Provider/API matrix: OpenAI (Responses), OpenAI (Chat Completions), Grok, Groq, OpenRouter (Chat Completions), OpenRouter (Responses beta) — each with its own model dropdown.
  • Stream toggle: off uses non-streaming chat({ outputSchema }), on uses structuredOutputStream.
  • Progressive UI rendering via parsePartialJSON — title, summary, recommendation cards (brand → name → type → price → reason), and next steps fill in field-by-field as JSON streams in, snapping to the validated payload on the terminal event.
  • Live "Thinking" strip rendering the latest reasoning sentence from REASONING_MESSAGE_CONTENT deltas.
  • Per-provider reasoning opt-ins via modelOptions (gpt-5.x/o-series → reasoning.summary: 'auto', groq gpt-oss/qwen3/kimi-k2 → reasoning_format: 'parsed', openrouter → reasoning.effort: 'medium').
  • Sidebar label cleaned up: "Structured Output (OpenRouter)" → "Structured Output" (it now spans every provider).

/generations/summarize overhauled too:

  • New model picker (gpt-4o-mini through gpt-5.2) plumbed through both API route and server-fn paths.
  • Streaming and Server-Fn modes now visibly stream — wired an onChunk handler that accumulates TEXT_MESSAGE_CONTENT deltas into local state so the summary renders token-by-token, with a "streaming…" indicator next to the heading.
  • Dropped the hard-coded maxLength: 200. On Responses-API reasoning models maxLength maps to max_output_tokens which covers reasoning + visible output combined; a 200-token cap consumed the whole budget on hidden reasoning, returning truncated responses with no summary. The concise / bullet-points / paragraph prompt instruction is enough to drive length.

E2E

testing/e2e/src/lib/feature-support.tsstructured-output-stream set expanded from ['openrouter'] to all four providers. The parameterised spec in testing/e2e/tests/structured-output-stream.spec.ts (happy path + abort) now runs across each.

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

Verified during development: pnpm test:lib, pnpm test:types, pnpm test:eslint, pnpm test:build, pnpm test:knip across affected packages. Hand-tested in examples/ts-react-chat against each provider for both streaming and non-streaming. E2E (pnpm --filter @tanstack/ai-e2e test:e2e -- --grep structured-output-stream) needs to run on a host where port 4010 is free.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

One consolidated changeset (.changeset/streaming-structured-output.md) covers the union of version bumps: minor on @tanstack/ai, @tanstack/openai-base, @tanstack/ai-openai, @tanstack/ai-grok, @tanstack/ai-groq, @tanstack/ai-openrouter; patch on @tanstack/ai-anthropic, @tanstack/ai-gemini, @tanstack/ai-ollama. The body retains the per-area sections from the originals (core, openai-base, provider adapters, openrouter decoupling, summarize subsystem) and adds the tagged-CustomEvent type design.

@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented May 5, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough

Walkthrough

chat({ outputSchema, stream: true }) now produces an AsyncIterable that emits JSON delta chunks plus a terminal CUSTOM event structured-output.complete with { object, raw, reasoning? }. Adapters may implement structuredOutputStream; BaseTextAdapter provides a fallback wrapping non-streaming structured output. Type and runtime plumbing added to route/validate streams.

Changes

Streaming Structured Output (single cohesive change DAG)

Layer / File(s) Summary
Type Definitions
packages/typescript/ai/src/types.ts
Adds StructuredOutputCompleteEvent<T>, StructuredOutputStream<T>, and isStructuredOutputCompleteEvent() to represent/identify the terminal structured-output completion event.
Adapter Interface
packages/typescript/ai/src/activities/chat/adapter.ts
Adds optional structuredOutputStream(options) to TextAdapter, documenting expected AG-UI lifecycle emissions and the final CUSTOM('structured-output.complete') payload.
Chat Types
packages/typescript/ai/src/activities/chat/index.ts (type layer)
Changes TStream defaults and TextActivityResult so streaming structured output is selected only when stream === true via [TStream] extends [true].
Chat Runtime
packages/typescript/ai/src/activities/chat/index.ts (runtime layer)
Adds explicit outputSchema && stream === true branch, implements runStreamingStructuredOutput/runStreamingStructuredOutputImpl, wires abortController into adapter requests, and provides fallbackStructuredOutputStream that synthesizes lifecycle events from non-streaming adapters.
Adapter Implementations
packages/typescript/ai-openai/..., packages/typescript/ai-grok/..., packages/typescript/ai-groq/..., packages/typescript/ai-openrouter/...
Each adapter adds structuredOutputStream async generator making a single stream: true JSON-schema request (strict), strips tools, emits AG-UI lifecycle events and TEXT_MESSAGE_CONTENT deltas, accumulates raw JSON, parses/transforms (null→undefined), emits CUSTOM('structured-output.complete') with { object, raw, reasoning? }, and emits RUN_FINISHED or RUN_ERROR for parse/empty/abort/fatal cases. OpenRouter also adjusts lifecycle finalization flags.
Adapter Unit Tests
packages/typescript/ai-*/tests/*
Adds suites asserting single-request params (json_schema/strict), tools stripped, AG-UI event ordering, accumulation/parsing into CUSTOM, parse/empty/abort/failure handling, reasoning propagation, and null→undefined transform behavior.
Core & Adapter Changesets
.changeset/streaming-structured-output-*.md
Adds changeset entries documenting streaming structured-output behavior across core and provider adapters.
E2E: Features & Fixtures
testing/e2e/src/lib/*, testing/e2e/src/lib/types.ts, testing/e2e/fixtures/structured-output-stream/*.json
Adds 'structured-output-stream' feature, provider matrix entries, feature config, fixtures basic.json and abort.json, and README updates for fixture prefixes.
E2E: UI & Routes
testing/e2e/src/components/ChatUI.tsx, testing/e2e/src/routes/$provider/$feature.tsx, testing/e2e/src/routes/api.chat.ts
Exposes hidden test attributes (data-structured-output, data-count), wires onCustomEvent/onChunk to capture structured-output.complete and delta counts, and conditions /api/chat to call chat() with outputSchema + stream: true for the feature.
E2E Tests
testing/e2e/tests/structured-output-stream.spec.ts
Adds Playwright tests verifying streamed JSON delta accumulation, final structured-output.complete payload delivery, multiple deltas (>1), and mid-stream abort preventing completion.
Examples: Structured Output UI
examples/ts-react-chat/src/routes/*.ts*
Adds multi-provider adapter selection, streaming SSE handling that accumulates TEXT_MESSAGE_CONTENT deltas and reasoning, progressive parse/update of partial JSON, delta counting and abort control, and UI updates for streaming state and partial/result rendering.

Sequence Diagram

sequenceDiagram
    participant Client
    participant ChatRoute as /api/chat
    participant ChatFn as chat()
    participant Engine as TextEngine
    participant Adapter as TextAdapter
    participant UI as ChatUI

    Client->>ChatRoute: POST structured-stream request
    ChatRoute->>ChatFn: chat({ outputSchema, stream: true, ... })
    ChatFn->>ChatFn: branch on outputSchema && stream === true
    ChatFn->>Engine: convert messages / run agent loop (if tools)
    Engine->>Adapter: adapter.structuredOutputStream(request)
    loop streaming
        Adapter-->>ChatFn: TEXT_MESSAGE_CONTENT chunk
        ChatFn-->>ChatRoute: forward chunk via SSE
        ChatRoute-->>UI: SSE chunk
        UI->>UI: accumulate delta
    end
    Adapter->>Adapter: parse accumulated JSON & validate schema
    Adapter-->>ChatFn: CUSTOM(structured-output.complete { object, raw, reasoning? })
    ChatFn-->>ChatRoute: forward CUSTOM event
    ChatRoute-->>UI: SSE CUSTOM event
    UI->>UI: display structured object
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

"I nibble deltas one by one, so neat,
Streaming JSON rhythms, tiny feet,
Parsing, validating, then complete—hooray! 🐰
A structured carrot arrives today,
I hop, I stash the final JSON treat."

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 63.64% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Title check ✅ Passed The title 'feat: streaming structured output (chat outputSchema + stream:true)' clearly and concisely summarizes the main feature addition: streaming structured output with chat when outputSchema and stream are both true.
Description check ✅ Passed The PR description is comprehensive and well-structured, covering changes, test results, and implementation details across all affected packages and providers.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch 526-featai-openrouter-support-streaming-structured-output-response_format-json_schema-with-stream-true

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions
Copy link
Copy Markdown
Contributor

github-actions Bot commented May 5, 2026

🚀 Changeset Version Preview

14 package(s) bumped directly, 20 bumped as dependents.

🟥 Major bumps

Package Version Reason
@tanstack/ai-anthropic 0.8.6 → 1.0.0 Changeset
@tanstack/ai-fal 0.7.3 → 1.0.0 Changeset
@tanstack/ai-gemini 0.10.3 → 1.0.0 Changeset
@tanstack/ai-grok 0.7.3 → 1.0.0 Changeset
@tanstack/ai-groq 0.1.11 → 1.0.0 Changeset
@tanstack/ai-ollama 0.6.13 → 1.0.0 Changeset
@tanstack/ai-openai 0.8.5 → 1.0.0 Changeset
@tanstack/ai-openrouter 0.8.5 → 1.0.0 Changeset
@tanstack/ai-react 0.8.2 → 1.0.0 Changeset
@tanstack/ai-solid 0.7.2 → 1.0.0 Changeset
@tanstack/ai-svelte 0.7.2 → 1.0.0 Changeset
@tanstack/ai-vue 0.7.2 → 1.0.0 Changeset
@tanstack/openai-base 0.2.1 → 1.0.0 Changeset
@tanstack/ai-code-mode 0.1.10 → 1.0.0 Dependent
@tanstack/ai-code-mode-skills 0.1.10 → 1.0.0 Dependent
@tanstack/ai-elevenlabs 0.2.3 → 1.0.0 Dependent
@tanstack/ai-event-client 0.3.0 → 1.0.0 Dependent
@tanstack/ai-isolate-node 0.1.10 → 1.0.0 Dependent
@tanstack/ai-isolate-quickjs 0.1.10 → 1.0.0 Dependent
@tanstack/ai-preact 0.6.22 → 1.0.0 Dependent
@tanstack/ai-react-ui 0.6.3 → 1.0.0 Dependent
@tanstack/ai-solid-ui 0.6.3 → 1.0.0 Dependent

🟨 Minor bumps

Package Version Reason
@tanstack/ai 0.16.0 → 0.17.0 Changeset

🟩 Patch bumps

Package Version Reason
@tanstack/ai-client 0.9.1 → 0.9.2 Dependent
@tanstack/ai-code-mode-models-eval 0.0.15 → 0.0.16 Dependent
@tanstack/ai-devtools-core 0.3.27 → 0.3.28 Dependent
@tanstack/ai-isolate-cloudflare 0.2.1 → 0.2.2 Dependent
@tanstack/ai-vue-ui 0.1.33 → 0.1.34 Dependent
@tanstack/preact-ai-devtools 0.1.31 → 0.1.32 Dependent
@tanstack/react-ai-devtools 0.2.31 → 0.2.32 Dependent
@tanstack/solid-ai-devtools 0.2.31 → 0.2.32 Dependent
ts-svelte-chat 0.1.41 → 0.1.42 Dependent
ts-vue-chat 0.1.41 → 0.1.42 Dependent
vanilla-chat 0.0.37 → 0.0.38 Dependent

@nx-cloud
Copy link
Copy Markdown

nx-cloud Bot commented May 5, 2026

View your CI Pipeline Execution ↗ for commit 603cd6b

Command Status Duration Result
nx run-many --targets=build --exclude=examples/** ✅ Succeeded 59s View ↗

☁️ Nx Cloud last updated this comment at 2026-05-14 13:34:23 UTC

@pkg-pr-new
Copy link
Copy Markdown

pkg-pr-new Bot commented May 5, 2026

Open in StackBlitz

@tanstack/ai

npm i https://pkg.pr.new/@tanstack/ai@527

@tanstack/ai-anthropic

npm i https://pkg.pr.new/@tanstack/ai-anthropic@527

@tanstack/ai-client

npm i https://pkg.pr.new/@tanstack/ai-client@527

@tanstack/ai-code-mode

npm i https://pkg.pr.new/@tanstack/ai-code-mode@527

@tanstack/ai-code-mode-skills

npm i https://pkg.pr.new/@tanstack/ai-code-mode-skills@527

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/@tanstack/ai-devtools-core@527

@tanstack/ai-elevenlabs

npm i https://pkg.pr.new/@tanstack/ai-elevenlabs@527

@tanstack/ai-event-client

npm i https://pkg.pr.new/@tanstack/ai-event-client@527

@tanstack/ai-fal

npm i https://pkg.pr.new/@tanstack/ai-fal@527

@tanstack/ai-gemini

npm i https://pkg.pr.new/@tanstack/ai-gemini@527

@tanstack/ai-grok

npm i https://pkg.pr.new/@tanstack/ai-grok@527

@tanstack/ai-groq

npm i https://pkg.pr.new/@tanstack/ai-groq@527

@tanstack/ai-isolate-cloudflare

npm i https://pkg.pr.new/@tanstack/ai-isolate-cloudflare@527

@tanstack/ai-isolate-node

npm i https://pkg.pr.new/@tanstack/ai-isolate-node@527

@tanstack/ai-isolate-quickjs

npm i https://pkg.pr.new/@tanstack/ai-isolate-quickjs@527

@tanstack/ai-ollama

npm i https://pkg.pr.new/@tanstack/ai-ollama@527

@tanstack/ai-openai

npm i https://pkg.pr.new/@tanstack/ai-openai@527

@tanstack/ai-openrouter

npm i https://pkg.pr.new/@tanstack/ai-openrouter@527

@tanstack/ai-preact

npm i https://pkg.pr.new/@tanstack/ai-preact@527

@tanstack/ai-react

npm i https://pkg.pr.new/@tanstack/ai-react@527

@tanstack/ai-react-ui

npm i https://pkg.pr.new/@tanstack/ai-react-ui@527

@tanstack/ai-solid

npm i https://pkg.pr.new/@tanstack/ai-solid@527

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/@tanstack/ai-solid-ui@527

@tanstack/ai-svelte

npm i https://pkg.pr.new/@tanstack/ai-svelte@527

@tanstack/ai-utils

npm i https://pkg.pr.new/@tanstack/ai-utils@527

@tanstack/ai-vue

npm i https://pkg.pr.new/@tanstack/ai-vue@527

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/@tanstack/ai-vue-ui@527

@tanstack/openai-base

npm i https://pkg.pr.new/@tanstack/openai-base@527

@tanstack/preact-ai-devtools

npm i https://pkg.pr.new/@tanstack/preact-ai-devtools@527

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/@tanstack/react-ai-devtools@527

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/@tanstack/solid-ai-devtools@527

commit: af1e6ef

Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
.changeset/streaming-structured-output-openrouter.md (1)

1-6: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Missing changeset entry for @tanstack/ai.

The changeset only bumps @tanstack/ai-openrouter, but according to the PR summary and the AI-generated diff summary, @tanstack/ai also ships public API additions in this PR:

  • StructuredOutputCompleteEvent / StructuredOutputStream types added to packages/typescript/ai/src/types.ts
  • New outputSchema && stream === true path in packages/typescript/ai/src/activities/chat/index.ts
  • BaseTextAdapter behaviour change (default structuredOutputStream removed)

Without a corresponding entry, @tanstack/ai won't receive a version bump when the changeset is consumed.

📦 Suggested addition to the changeset
 ---
+'@tanstack/ai': minor
 '@tanstack/ai-openrouter': minor
 ---

As per coding guidelines: .changeset/**/*.md — create a changeset with pnpm changeset before making changes for release management.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In @.changeset/streaming-structured-output-openrouter.md around lines 1 - 6, Add
a new changeset markdown entry that also bumps `@tanstack/ai` (in addition to
`@tanstack/ai-openrouter`) and documents the public API additions: include the new
types StructuredOutputCompleteEvent and StructuredOutputStream (from
packages/typescript/ai/src/types.ts), the new outputSchema && stream === true
chat path (packages/typescript/ai/src/activities/chat/index.ts), and the
BaseTextAdapter behavior change (removal/default change of
structuredOutputStream); ensure the changeset message explains these API
additions/behavior change so `@tanstack/ai` is versioned when the changeset is
released.
.changeset/streaming-structured-output-chat.md (1)

1-6: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

@tanstack/ai-openrouter is missing from the changeset packages block.

OpenRouterTextAdapter.structuredOutputStream is a new public method in packages/typescript/ai-openrouter — a user-visible feature addition. Without a corresponding entry in the changeset, the package version won't be bumped on release.

📦 Suggested fix
 ---
 '@tanstack/ai': minor
+'@tanstack/ai-openrouter': minor
 ---
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In @.changeset/streaming-structured-output-chat.md around lines 1 - 6, The
changeset is missing a package entry for the new public method
OpenRouterTextAdapter.structuredOutputStream, so add the package name for the
ai-openrouter adapter to the changeset packages block and mark it as a minor
release (public API addition); update the same changeset that introduced
streaming-structured-output-chat to include '@tanstack/ai-openrouter': minor so
the package version is bumped on release and consumers get the new
structuredOutputStream method.
🧹 Nitpick comments (2)
packages/typescript/ai-openrouter/src/adapters/text.ts (1)

266-530: 🏗️ Heavy lift

structuredOutputStream duplicates ~150 lines of stream lifecycle boilerplate from chatStream.

The AGUIState initialization (lines 272–291), the per-chunk RUN_STARTED emission + inline error dispatch + processChoice loop (lines 324–376), and the catch block structure (lines 490–529) are near-verbatim copies of the equivalent blocks in chatStream. Any future change to stream lifecycle handling (e.g., abort signal threading, new event types, logging conventions) must be applied to both methods independently, increasing divergence risk.

Consider extracting a shared runStreamLoop(stream, aguiState, accumulators, onFinalize) helper that takes the finalization callback as a parameter — the only materially different code between the two methods.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@packages/typescript/ai-openrouter/src/adapters/text.ts` around lines 266 -
530, structuredOutputStream duplicates ~150 lines of stream lifecycle
boilerplate from chatStream (AGUIState init, per-chunk RUN_STARTED/error
handling, processChoice loop, and catch block); extract this shared logic into a
new helper (e.g., runStreamLoop) that accepts the stream iterator, the
aguiState, accumulators (accumulatedReasoning, accumulatedContent,
toolCallBuffers), and a finalization callback (onFinalize) to run the
method-specific wrap-up (JSON parsing, structured-output CUSTOM emission, and
RUN_FINISHED); update structuredOutputStream to build its AGUIState and
accumulators, call runStreamLoop(stream, aguiState, {accumulatedReasoning,
accumulatedContent, toolCallBuffers}, onFinalize) and move processChoice usage,
per-chunk yields, and catch handling into the shared helper so both
structuredOutputStream and chatStream reuse the same lifecycle code.
packages/typescript/ai/src/activities/chat/index.ts (1)

1717-1722: 💤 Low value

mock- ID prefixes leak into real adapter fallback runs.

fallbackStructuredOutputStream is the production path for any adapter without a native structuredOutputStream, but the synthesized runId / threadId / messageId use a mock- prefix. That looks like test scaffolding in observability/devtools output and is inconsistent with createId('run' | 'thread' | 'msg') used elsewhere in this engine.

♻️ Use neutral prefixes
-  const runId = chatOptions.runId ?? `mock-${Date.now()}`
-  const threadId = chatOptions.threadId ?? `mock-${Date.now()}`
-  const messageId = `mock-${Date.now()}-${Math.random().toString(36).slice(2)}`
+  const rand = () => Math.random().toString(36).slice(2, 9)
+  const runId = chatOptions.runId ?? `run-${Date.now()}-${rand()}`
+  const threadId = chatOptions.threadId ?? `thread-${Date.now()}-${rand()}`
+  const messageId = `msg-${Date.now()}-${rand()}`
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@packages/typescript/ai/src/activities/chat/index.ts` around lines 1717 -
1722, The current fallback ID generation in chat (variables chatOptions, runId,
threadId, messageId) uses "mock-" prefixes which leak into production telemetry;
replace the mock-prefixed defaults with the same neutral ID generation used
elsewhere (e.g., call the shared createId helper with 'run' | 'thread' | 'msg'
or use the existing engine's neutral prefix strategy) so
runId/threadId/messageId are generated consistently when chatOptions doesn't
supply them; update the fallback logic in the code that reads
chatOptions.model/timestamp to use createId for runId and threadId and a
non-mock random message id for messageId.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In @.changeset/streaming-structured-output-chat.md:
- Line 5: The changelog line incorrectly states that BaseTextAdapter provides a
default structuredOutputStream; update the description to say BaseTextAdapter
does not implement structuredOutputStream and that the activity layer's
fallbackStructuredOutputStream is the single source of truth for non-streaming
adapters. Reference BaseTextAdapter and structuredOutputStream in
packages/typescript/ai/src/activities/chat/adapter.ts and mention
fallbackStructuredOutputStream in the activity layer as the fallback, and advise
adapter authors to implement structuredOutputStream if they need native
streaming behavior.

In `@packages/typescript/ai-openrouter/src/adapters/text.ts`:
- Around line 378-440: The finalization path for empty non-exception streams
fails to emit RUN_STARTED if no chunks arrived; update the end-of-stream logic
in the generator (the block after the streaming loop that currently emits
RUN_ERROR for empty content) to check hasEmittedRunStarted and, if false, set
hasEmittedRunStarted = true and yield the same RUN_STARTED chunk that the normal
streaming path emits (include runId from aguiState.runId, model resolvedModel,
and timestamp) before yielding RUN_ERROR; this mirrors the catch block behavior
and ensures the AG-UI lifecycle (hasEmittedRunStarted, RUN_STARTED, then
RUN_ERROR) is preserved.

In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 1860-1886: The loop over engine.run() can yield a RUN_ERROR chunk
without throwing, but the code always proceeds to call engine.getMessages() and
adapter.structuredOutputStream(...); fix by detecting an error-yield and
short-circuiting: inside the for-await loop in runStreamingStructuredOutput (the
block that calls engine.run()), when you yield a chunk of type 'RUN_ERROR' set a
local flag (e.g., sawRunError) or reuse earlyTermination set by
handleRunErrorEvent, then after the loop but before calling finalMessages =
engine.getMessages() and adapter.structuredOutputStream(...), check that flag
and return early (or skip the structured-output call) so no structured-output
flow is started after an error. Ensure the check references the same symbols:
engine.run(), handleRunErrorEvent, finalMessages, and
adapter.structuredOutputStream.

In `@testing/e2e/README.md`:
- Line 126: Update the documented prefix list to include the undocumented prefix
used by the abort fixture: add "[structured-stream-abort]" to the "Existing
prefixes" line so it reflects the actual usage in
fixtures/structured-output-stream/abort.json; ensure the exact token
"[structured-stream-abort]" is added alongside "[structured-stream]" to prevent
future collisions.

---

Outside diff comments:
In @.changeset/streaming-structured-output-chat.md:
- Around line 1-6: The changeset is missing a package entry for the new public
method OpenRouterTextAdapter.structuredOutputStream, so add the package name for
the ai-openrouter adapter to the changeset packages block and mark it as a minor
release (public API addition); update the same changeset that introduced
streaming-structured-output-chat to include '@tanstack/ai-openrouter': minor so
the package version is bumped on release and consumers get the new
structuredOutputStream method.

In @.changeset/streaming-structured-output-openrouter.md:
- Around line 1-6: Add a new changeset markdown entry that also bumps
`@tanstack/ai` (in addition to `@tanstack/ai-openrouter`) and documents the public
API additions: include the new types StructuredOutputCompleteEvent and
StructuredOutputStream (from packages/typescript/ai/src/types.ts), the new
outputSchema && stream === true chat path
(packages/typescript/ai/src/activities/chat/index.ts), and the BaseTextAdapter
behavior change (removal/default change of structuredOutputStream); ensure the
changeset message explains these API additions/behavior change so `@tanstack/ai`
is versioned when the changeset is released.

---

Nitpick comments:
In `@packages/typescript/ai-openrouter/src/adapters/text.ts`:
- Around line 266-530: structuredOutputStream duplicates ~150 lines of stream
lifecycle boilerplate from chatStream (AGUIState init, per-chunk
RUN_STARTED/error handling, processChoice loop, and catch block); extract this
shared logic into a new helper (e.g., runStreamLoop) that accepts the stream
iterator, the aguiState, accumulators (accumulatedReasoning, accumulatedContent,
toolCallBuffers), and a finalization callback (onFinalize) to run the
method-specific wrap-up (JSON parsing, structured-output CUSTOM emission, and
RUN_FINISHED); update structuredOutputStream to build its AGUIState and
accumulators, call runStreamLoop(stream, aguiState, {accumulatedReasoning,
accumulatedContent, toolCallBuffers}, onFinalize) and move processChoice usage,
per-chunk yields, and catch handling into the shared helper so both
structuredOutputStream and chatStream reuse the same lifecycle code.

In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 1717-1722: The current fallback ID generation in chat (variables
chatOptions, runId, threadId, messageId) uses "mock-" prefixes which leak into
production telemetry; replace the mock-prefixed defaults with the same neutral
ID generation used elsewhere (e.g., call the shared createId helper with 'run' |
'thread' | 'msg' or use the existing engine's neutral prefix strategy) so
runId/threadId/messageId are generated consistently when chatOptions doesn't
supply them; update the fallback logic in the code that reads
chatOptions.model/timestamp to use createId for runId and threadId and a
non-mock random message id for messageId.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a66ef4a2-3824-4fbc-ad8f-03cfd2e1f307

📥 Commits

Reviewing files that changed from the base of the PR and between ff33855 and bd550bf.

📒 Files selected for processing (17)
  • .changeset/streaming-structured-output-chat.md
  • .changeset/streaming-structured-output-openrouter.md
  • packages/typescript/ai-openrouter/src/adapters/text.ts
  • packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts
  • packages/typescript/ai/src/activities/chat/adapter.ts
  • packages/typescript/ai/src/activities/chat/index.ts
  • packages/typescript/ai/src/types.ts
  • testing/e2e/README.md
  • testing/e2e/fixtures/structured-output-stream/abort.json
  • testing/e2e/fixtures/structured-output-stream/basic.json
  • testing/e2e/src/components/ChatUI.tsx
  • testing/e2e/src/lib/feature-support.ts
  • testing/e2e/src/lib/features.ts
  • testing/e2e/src/lib/types.ts
  • testing/e2e/src/routes/$provider/$feature.tsx
  • testing/e2e/src/routes/api.chat.ts
  • testing/e2e/tests/structured-output-stream.spec.ts

Comment thread .changeset/streaming-structured-output-chat.md Outdated
Comment thread packages/typescript/ai-openrouter/src/adapters/text.ts Outdated
Comment thread packages/typescript/ai/src/activities/chat/index.ts
Comment thread testing/e2e/README.md Outdated
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@examples/ts-react-chat/src/routes/api.structured-output.ts`:
- Around line 76-80: The non-streaming branch calls chat(...) but doesn't pass
the HTTP request abort signal, so provider work continues after client
disconnects; modify the call to propagate the request's abort signal (e.g., pass
req.signal or a derived AbortSignal) into chat and/or the adapter: update the
chat invocation in the non-streaming path to include signal: req.signal (or
create an AbortController that ties to req.signal and pass that) and ensure
adapterFor(...) or the adapter returned accepts/forwards that signal so provider
requests are cancelled when the client disconnects.
- Around line 30-43: The POST body should be parsed and validated inside a
try/catch and you must validate the provider before calling adapterFor to return
a 400 for bad input; move the request.json() call into the try block, validate
required fields (e.g., provider and optional model types), and if provider is
missing or not one of the expected values return a 400 JSON error instead of
proceeding. Also harden adapterFor by adding a default/else branch (or throw)
when provider is unknown so it cannot return undefined (refer to adapterFor and
its switch cases like 'openai'/'grok'/'groq'/'openrouter'); ensure any casted
model strings are validated or fall back to safe defaults only after input
validation.

In `@packages/typescript/ai-grok/src/adapters/text.ts`:
- Around line 447-450: The logger currently sends the raw SDK error
(logger.errors('grok.structuredOutputStream fatal', { error, ... })) which may
contain sensitive request/auth metadata; replace the raw error with a
sanitized/normalized error object like the one used in
packages/typescript/ai-openai/src/adapters/text.ts (e.g., call the same
sanitize/normalizeOpenAIError helper from that module or replicate its behavior)
and log only the sanitizedError (and minimal context fields) instead of the raw
SDK error so request-level details are not leaked.

In `@packages/typescript/ai-openai/src/adapters/text.ts`:
- Around line 312-424: The loop currently only handles
'response.output_text.delta' and drops 'response.reasoning_text.delta' and
'response.reasoning_summary_text.delta', so add handling for those chunk.type
values: create a separate accumulator (e.g., reasoningAccumulatedContent) and
flags (similar to hasEmittedTextMessageStart) to collect and emit reasoning
deltas via yield asChunk (use types analogous to TEXT_MESSAGE_START/CONTENT or a
clear reasoning event), append deltas when chunk.delta is string|array, and
ensure when the run completes ('response.completed' / final structured-output
emission) you attach the accumulated reasoning to the structured-output
completion payload (structured-output.complete.value.reasoning) so
schema-failure post-mortems receive the model's reasoning; use existing symbols
runId, messageId, model, timestamp, asChunk, accumulatedContent for locating
where to add this logic.

In `@testing/e2e/src/lib/feature-support.ts`:
- Around line 83-85: Update the comment to remove the stale reference to
BaseTextAdapter and instead mention the activity-layer
fallbackStructuredOutputStream as the fallback handler; edit the block that
currently references "BaseTextAdapter implementation" so it reads that other
providers fall back to the activity-layer fallbackStructuredOutputStream (or
similar phrasing) and keep the rest of the comment about providers with native
streaming JSON schema support intact to avoid confusion during matrix
maintenance.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d0a46433-8d08-486d-aaac-9c40b3c58f5d

📥 Commits

Reviewing files that changed from the base of the PR and between bd550bf and 0087eb9.

📒 Files selected for processing (12)
  • .changeset/streaming-structured-output-grok.md
  • .changeset/streaming-structured-output-groq.md
  • .changeset/streaming-structured-output-openai.md
  • examples/ts-react-chat/src/routes/api.structured-output.ts
  • examples/ts-react-chat/src/routes/generations.structured-output.tsx
  • packages/typescript/ai-grok/src/adapters/text.ts
  • packages/typescript/ai-grok/tests/grok-adapter.test.ts
  • packages/typescript/ai-groq/src/adapters/text.ts
  • packages/typescript/ai-groq/tests/groq-adapter.test.ts
  • packages/typescript/ai-openai/src/adapters/text.ts
  • packages/typescript/ai-openai/tests/openai-adapter.test.ts
  • testing/e2e/src/lib/feature-support.ts
✅ Files skipped from review due to trivial changes (3)
  • .changeset/streaming-structured-output-openai.md
  • .changeset/streaming-structured-output-grok.md
  • .changeset/streaming-structured-output-groq.md

Comment thread examples/ts-react-chat/src/routes/api.structured-output.ts
Comment thread examples/ts-react-chat/src/routes/api.structured-output.ts
Comment thread packages/typescript/ai-grok/src/adapters/text.ts Outdated
Comment thread packages/typescript/ai-openai/src/adapters/text.ts Outdated
Comment thread testing/e2e/src/lib/feature-support.ts Outdated
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx`:
- Around line 281-300: The labels "Provider", "Model" and "Prompt" are not
linked to their form controls; add id attributes to the corresponding <select>
and <textarea> elements (e.g., "provider-select", "model-select",
"prompt-textarea") and set the matching htmlFor on each <label> so the label
text is associated with the control (affects the <select value={provider}
onChange={onProviderChange} disabled={isLoading}>, the <select value={model}
onChange={e => setModel(e.target.value} disabled={isLoading}>, and the prompt
<textarea>), ensuring unique ids and keeping existing props like isLoading
intact.
- Around line 222-229: The handler updates reasoningFull when a streamed chunk
contains a reasoning string (setReasoningFull on chunk.value) but never
recomputes or updates the one-line summary reasoningLine, so the “Thinking”
strip can remain blank or stale; after calling setReasoningFull((chunk.value as
{ reasoning: string }).reasoning) also derive and call setReasoningLine with a
compact one-line version (e.g., trim and collapse whitespace or take the first
sentence) so that both reasoningFull and reasoningLine stay consistent when
reasoning is sent only in the final structured-output.complete or when the final
chunk extends earlier reasoning.
- Around line 170-235: The reader loop currently exits on EOF without failing if
the canonical completion signal ("structured-output.complete") never arrived;
introduce a local boolean flag (e.g. sawFinalResult = false) and set it to true
inside the branch that handles chunk.type === 'CUSTOM' && chunk.name ===
'structured-output.complete' (also still call setHasFinalResult(true)). After
the outer while (after you break on done from reader.read()), check if
(!sawFinalResult) and throw a descriptive Error (e.g. 'Stream ended before
structured-output.complete arrived') so the run fails on EOF when the final
structured-output payload was not observed.
- Around line 163-174: The stream decoding loop uses a TextDecoder with {stream:
true} but never flushes remaining buffered bytes when the reader loop exits;
after the while(true) loop that reads from response.body!.getReader() and
appends decoder.decode(value, {stream: true}) into buffer, call decoder.decode()
once more (without the stream flag) and append its result to buffer (or
otherwise process it) before continuing with accumulated/reasoning/deltas
handling so any partial UTF-8 sequences are correctly finalized.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 49bf45f2-6618-4113-8aea-d644b0809d7c

📥 Commits

Reviewing files that changed from the base of the PR and between 0087eb9 and 0a7f31a.

📒 Files selected for processing (7)
  • examples/ts-react-chat/src/routes/api.structured-output.ts
  • examples/ts-react-chat/src/routes/generations.structured-output.tsx
  • packages/typescript/ai-grok/src/adapters/text.ts
  • packages/typescript/ai-grok/tests/grok-adapter.test.ts
  • packages/typescript/ai-groq/src/adapters/text.ts
  • packages/typescript/ai-groq/tests/groq-adapter.test.ts
  • packages/typescript/ai-openai/src/adapters/text.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • examples/ts-react-chat/src/routes/api.structured-output.ts
  • packages/typescript/ai-groq/src/adapters/text.ts
  • packages/typescript/ai-groq/tests/groq-adapter.test.ts

Comment thread examples/ts-react-chat/src/routes/generations.structured-output.tsx Outdated
Comment thread examples/ts-react-chat/src/routes/generations.structured-output.tsx Outdated
Comment thread examples/ts-react-chat/src/routes/generations.structured-output.tsx
Comment thread examples/ts-react-chat/src/routes/generations.structured-output.tsx Outdated
Comment thread packages/typescript/ai-grok/src/adapters/text.ts Outdated
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

♻️ Duplicate comments (6)
examples/ts-react-chat/src/routes/api.structured-output.ts (2)

97-105: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Validate the POST body before casting it.

request.json() runs outside the try, and the unchecked cast lets malformed JSON or an unsupported provider fall through as a 500. Parse the body inside the try with a Zod schema and return 400 on invalid input instead of letting adapterFor() receive unchecked data.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@examples/ts-react-chat/src/routes/api.structured-output.ts` around lines 97 -
105, Move parsing of the POST body into the try block and validate it with a Zod
schema before casting so malformed JSON or invalid provider values return a 400
instead of reaching adapterFor(); specifically, define a Zod schema for {
prompt: string, provider?: Provider, model?: string, stream?: boolean }, call
await request.json() inside the try, parse/validate with schema.parse or
safeParse, and if validation fails return a 400 response; then compute
resolvedProvider (fallback to 'openrouter') only from validated data and pass
that safe value to adapterFor().

127-132: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Propagate disconnect aborts through the non-streaming chat() call too.

Only the streaming branch cancels provider work when request.signal aborts. The non-streaming call will keep running after the client is gone, burning tokens and tying up server capacity.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@examples/ts-react-chat/src/routes/api.structured-output.ts` around lines 127
- 132, The non-streaming chat call doesn't receive the client's abort signal, so
work continues after the client disconnects; update the chat invocation to pass
through the abort signal (e.g., include signal: request.signal) so that
chat(...) (and downstream adapterFor/resolvedProvider work) can cancel when the
request is aborted; ensure the chat call's options include the signal property
alongside adapter, modelOptions, messages, and outputSchema.
examples/ts-react-chat/src/routes/generations.structured-output.tsx (4)

279-299: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Associate the visible labels with their controls.

Provider, Model, and Prompt are rendered as standalone labels, so the selects and textarea lose explicit accessible names and label-click focus.

Also applies to: 326-334

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx` around
lines 279 - 299, The labels "Provider", "Model" and "Prompt" are not associated
with their form controls; add explicit associations by giving each
select/textarea a unique id (e.g., providerSelect, modelSelect, promptTextarea)
and set the corresponding label's htmlFor to that id (or wrap the control inside
the label). Update the JSX around the provider select (value={provider},
onChange={onProviderChange}), the model select (value={model},
onChange={setModel}), and the prompt textarea to use those ids so clicking a
label focuses the correct control and improves accessibility.

220-227: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Keep reasoningLine in sync with terminal reasoning.

This branch overwrites reasoningFull but leaves the one-line strip stale. If reasoning only arrives on structured-output.complete, the “Thinking” summary stays blank or outdated.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx` around
lines 220 - 227, When you set reasoningFull from the chunk (inside the branch
that checks (chunk.value as { reasoning?: string }).reasoning), also update
reasoningLine so it stays in sync: assign reasoningFull via
setReasoningFull((chunk.value as { reasoning: string }).reasoning) and
immediately call setReasoningLine with a one-line/trimmed version (e.g., first
line or trimmed slice) of the same value; update the same logic path that
handles structured-output.complete so both state variables (reasoningFull and
reasoningLine) are always updated from the same source.

168-233: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fail the run if EOF arrives before structured-output.complete.

The loop currently exits successfully on EOF even when only partial deltas were received. If the connection drops before the terminal event, this page leaves a partial object rendered with no error.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx` around
lines 168 - 233, The reader loop currently breaks on EOF even if the terminal
structured-output.complete was never received; add a local boolean (e.g.,
receivedFinalResult) initialized false, set it true in the branch where you call
setResult(...) and setHasFinalResult(true) for the
CUSTOM/structured-output.complete chunk (the same place that currently
setsHasFinalResult), and change the EOF handling (when done is true) to throw an
Error (or reject) if receivedFinalResult is still false (e.g., throw new
Error('Stream ended before structured-output.complete')). This ensures the run
fails on premature EOF while preserving existing state updates.

168-233: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Flush the TextDecoder after the read loop.

decoder.decode(value, { stream: true }) can buffer trailing UTF-8 bytes. Without a final decoder.decode(), the last SSE frame can be truncated or dropped when a multibyte character spans reads.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx` around
lines 168 - 233, The TextDecoder may have buffered trailing UTF-8 bytes; after
the read loop that uses decoder.decode(value, { stream: true }) (the while
(true) { const { done, value } = await reader.read() ... } block) call
decoder.decode() with no arguments to flush remaining bytes and append the
result to buffer before continuing to parse frames, then run the same
frame-parsing logic on the updated buffer; update references to buffer, decoder,
and the reader loop/stream parsing code so the final partial multibyte
characters are not dropped.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Duplicate comments:
In `@examples/ts-react-chat/src/routes/api.structured-output.ts`:
- Around line 97-105: Move parsing of the POST body into the try block and
validate it with a Zod schema before casting so malformed JSON or invalid
provider values return a 400 instead of reaching adapterFor(); specifically,
define a Zod schema for { prompt: string, provider?: Provider, model?: string,
stream?: boolean }, call await request.json() inside the try, parse/validate
with schema.parse or safeParse, and if validation fails return a 400 response;
then compute resolvedProvider (fallback to 'openrouter') only from validated
data and pass that safe value to adapterFor().
- Around line 127-132: The non-streaming chat call doesn't receive the client's
abort signal, so work continues after the client disconnects; update the chat
invocation to pass through the abort signal (e.g., include signal:
request.signal) so that chat(...) (and downstream adapterFor/resolvedProvider
work) can cancel when the request is aborted; ensure the chat call's options
include the signal property alongside adapter, modelOptions, messages, and
outputSchema.

In `@examples/ts-react-chat/src/routes/generations.structured-output.tsx`:
- Around line 279-299: The labels "Provider", "Model" and "Prompt" are not
associated with their form controls; add explicit associations by giving each
select/textarea a unique id (e.g., providerSelect, modelSelect, promptTextarea)
and set the corresponding label's htmlFor to that id (or wrap the control inside
the label). Update the JSX around the provider select (value={provider},
onChange={onProviderChange}), the model select (value={model},
onChange={setModel}), and the prompt textarea to use those ids so clicking a
label focuses the correct control and improves accessibility.
- Around line 220-227: When you set reasoningFull from the chunk (inside the
branch that checks (chunk.value as { reasoning?: string }).reasoning), also
update reasoningLine so it stays in sync: assign reasoningFull via
setReasoningFull((chunk.value as { reasoning: string }).reasoning) and
immediately call setReasoningLine with a one-line/trimmed version (e.g., first
line or trimmed slice) of the same value; update the same logic path that
handles structured-output.complete so both state variables (reasoningFull and
reasoningLine) are always updated from the same source.
- Around line 168-233: The reader loop currently breaks on EOF even if the
terminal structured-output.complete was never received; add a local boolean
(e.g., receivedFinalResult) initialized false, set it true in the branch where
you call setResult(...) and setHasFinalResult(true) for the
CUSTOM/structured-output.complete chunk (the same place that currently
setsHasFinalResult), and change the EOF handling (when done is true) to throw an
Error (or reject) if receivedFinalResult is still false (e.g., throw new
Error('Stream ended before structured-output.complete')). This ensures the run
fails on premature EOF while preserving existing state updates.
- Around line 168-233: The TextDecoder may have buffered trailing UTF-8 bytes;
after the read loop that uses decoder.decode(value, { stream: true }) (the while
(true) { const { done, value } = await reader.read() ... } block) call
decoder.decode() with no arguments to flush remaining bytes and append the
result to buffer before continuing to parse frames, then run the same
frame-parsing logic on the updated buffer; update references to buffer, decoder,
and the reader loop/stream parsing code so the final partial multibyte
characters are not dropped.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5087f1ff-61cd-4360-9d0e-51d3117b3ba0

📥 Commits

Reviewing files that changed from the base of the PR and between 0a7f31a and b14800e.

📒 Files selected for processing (2)
  • examples/ts-react-chat/src/routes/api.structured-output.ts
  • examples/ts-react-chat/src/routes/generations.structured-output.tsx

@tombeckenham tombeckenham requested a review from AlemTuzlak May 5, 2026 09:59
@tombeckenham
Copy link
Copy Markdown
Contributor Author

Structured streaming in action

Screen.Recording.2026-05-05.at.5.26.00.pm.mov

@autofix-troubleshooter
Copy link
Copy Markdown

Hi! I'm the autofix logoautofix.ci troubleshooter bot.

It looks like you correctly set up a CI job that uses the autofix.ci GitHub Action, but the autofix.ci GitHub App has not been installed for this repository. This means that autofix.ci unfortunately does not have the permissions to fix this pull request. If you are the repository owner, please install the app and then restart the CI workflow! 😃

@tombeckenham tombeckenham changed the base branch from main to 543-migrate-ai-groq-ai-openrouter-ai-ollama-to-openai-base-+-parameterize-the-base-for-sdk-shape-variance May 12, 2026 06:03
@tombeckenham tombeckenham force-pushed the 526-featai-openrouter-support-streaming-structured-output-response_format-json_schema-with-stream-true branch 2 times, most recently from 84a07ab to 2dc520b Compare May 13, 2026 07:27
@tombeckenham tombeckenham force-pushed the 543-migrate-ai-groq-ai-openrouter-ai-ollama-to-openai-base-+-parameterize-the-base-for-sdk-shape-variance branch from aeb4d6e to 7aff8b1 Compare May 13, 2026 10:35
@tombeckenham tombeckenham force-pushed the 526-featai-openrouter-support-streaming-structured-output-response_format-json_schema-with-stream-true branch from bb52f59 to 557b860 Compare May 13, 2026 10:59
@tombeckenham tombeckenham changed the title feat: streaming structured output (chat outputSchema + stream:true) feat: streaming structured output across openai/openrouter/grok/groq + summarize fix May 13, 2026
Comment thread packages/typescript/ai-openrouter/src/adapters/responses-text.ts Outdated
Comment thread packages/typescript/ai-openrouter/src/adapters/responses-text.ts Outdated
Comment thread packages/typescript/ai-openrouter/src/adapters/responses-text.ts Outdated
Comment thread packages/typescript/ai-openrouter/src/adapters/responses-text.ts Outdated
Comment thread packages/typescript/ai-openrouter/src/adapters/text.ts Outdated
Comment thread packages/typescript/ai/src/activities/chat/index.ts
Comment thread packages/typescript/ai/src/activities/chat/index.ts Outdated
Comment thread packages/typescript/ai/src/activities/chat/index.ts Outdated
tombeckenham and others added 6 commits May 14, 2026 12:29
…mple wiring

Packages
- ai-openai: add openaiChatCompletions / OpenAIChatCompletionsTextAdapter
  sibling to the existing Responses adapter. Thin subclass of
  OpenAIBaseChatCompletionsTextAdapter so callers can pick the older
  /v1/chat/completions wire format against the OpenAI SDK.
- ai: ChatStreamSummarizeAdapter.summarizeStream now accumulates summary
  text and emits a terminal CUSTOM { name: 'generation:result' } event
  before passing RUN_FINISHED through. Fixes useSummarize never populating
  result in connection/server-fn streaming modes — GenerationClient only
  sets result on that specific CUSTOM event.

ts-react-chat example
- Structured Output menu: drop the misleading '(OpenRouter)' suffix from
  the sidebar entry; relabel the OpenAI option as 'OpenAI (Responses)';
  add 'OpenAI (Chat Completions)' and 'OpenRouter (Responses beta)' so
  the page exposes all four wire-format combinations end-to-end.
- Summarize page: add a model picker (gpt-4o-mini through gpt-5.2) wired
  through to the API route and both server-fns. Drop the hard-coded
  maxLength: 200 which on Responses-API reasoning models gets the whole
  max_output_tokens budget consumed by hidden reasoning; the style
  instruction in the prompt already drives length. Live-render
  TEXT_MESSAGE_CONTENT deltas via onChunk so streaming mode is visibly
  streaming rather than appearing identical to direct.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…structured-output streams

Address PR #527 review feedback from @AlemTuzlak (3 comments on
responses-text.ts, 1 on text.ts, 2 on activities/chat/index.ts):

- ai-openrouter/responses-text.ts (`structuredOutputStream`):
  - Drop `(await this.orClient.beta.responses.send(...)) as AsyncIterable<StreamEvents>` —
    `EventStream<T>` already extends `AsyncIterable<T>`.
  - Drop `as ResponsesRequest['text']` on the inner `text` object — the SDK's
    request type accepts the literal shape directly.
  - Drop inline `(chunk as { ... }).delta` / `(chunk.response ?? {}) as {...}` casts.
    `NormalizedStreamEvent` already types `delta` and `response`; the existing
    `processStreamChunks` reads the same fields without casts.
  - Drop redundant `satisfies StreamChunk` (20×). The `AsyncIterable<StreamChunk>` /
    `Generator<StreamChunk>` return types already validate every yield site via
    contextual typing.

- ai-openrouter/text.ts (`structuredOutputStream`):
  - Drop `(await this.orClient.chat.send(...)) as AsyncIterable<ChatStreamChunk>`.
  - Drop redundant `satisfies StreamChunk` (17×).

- ai/activities/chat/index.ts:
  - Replace `{ chatOptions: TextOptions<any, any>; outputSchema: any }` parameter
    on `fallbackStructuredOutputStream` with `StructuredOutputOptions<Record<string,
    unknown>>` — the adapter-side type already exists.
  - Drop `(adapter as { provider?: string }).provider ?? adapter.name` in the
    structured-stream logger. `provider` is not a `TextAdapter` field; `adapter.name`
    is the canonical provider identifier.
  - Drop redundant `satisfies StreamChunk` / `satisfies StructuredOutputCompleteEvent`
    (8×) in `fallbackStructuredOutputStream` and `runStreamingStructuredOutputImpl`.

- ai/tests/chat-result-types.test.ts (new):
  - Add type-only regression test for `TextActivityResult`. Pins each
    `(outputSchema?, stream?)` combination so #526's streaming-structured-output
    branch can't silently regress to a Promise (or vice versa).
Use narrowed locals (`outputSchema`, `stream`) and explicit `outputSchema: undefined` overrides instead of double-casting `options` through `unknown`. The trailing `as TextActivityResult<TSchema, TStream>` stays — TS narrows value types from runtime guards but not generic type parameters, so the conditional return type can't be reduced from inside a branch.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@tombeckenham tombeckenham force-pushed the 526-featai-openrouter-support-streaming-structured-output-response_format-json_schema-with-stream-true branch from 6988031 to d5daa73 Compare May 14, 2026 02:29
@tombeckenham tombeckenham requested a review from AlemTuzlak May 14, 2026 02:37
tombeckenham and others added 18 commits May 14, 2026 12:38
Cover chat({ outputSchema, stream: true }) in docs/chat/structured-outputs.md
and the ai-core/structured-outputs skill: StructuredOutputStream<T> return
type, isStructuredOutputCompleteEvent example, structured-output.complete
event shape, per-adapter coverage (native vs. fallback), and a HIGH common
mistake against parsing partial JSON deltas. Adds a cross-ref from the skill
to ai-core/chat-experience.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…nks in structuredOutputStream

Public `StructuredOutputStream<T>` is now a discriminated union over three
tagged CUSTOM variants: `structured-output.complete<T>`, `approval-requested`,
and `tool-input-available`. Each has a literal `name` and typed `value`, so
`chunk.type === 'CUSTOM' && chunk.name === '<literal>'` narrows directly to
the exact shape — no `isStructuredOutputCompleteEvent` helper or cast needed.
The bare CustomEvent is excluded from the union (its `value: any` would
collapse the narrow to `any`); user-emitted events via the `emitCustomEvent`
API still flow at runtime as a documented residual gap.

New exports from @tanstack/ai: `ApprovalRequestedEvent`,
`ToolInputAvailableEvent`. The `isStructuredOutputCompleteEvent` helper is
removed (this overload is new in this PR — no shipped consumers).

Per-chunk `logger.provider(...)` debug logging added inside
`structuredOutputStream` for the four affected adapters (openai-base
chat-completions + responses, ai-openrouter text + responses-text), matching
the existing pattern in `chatStream` for end-to-end introspection in debug
mode. ai-openrouter uses `finishReason` (camelCase) consistent with the SDK
and the sibling chatStream logger; openai-base uses `finish_reason` per the
openai SDK shape.

Docs (`docs/chat/structured-outputs.md`) and the AI-core
`structured-outputs` SKILL.md updated to use the direct discriminated
narrow.
Merge the 10 changesets covering this PR (streaming structured output
across chat/grok/groq/openai/openai-base/openrouter, the openrouter
decoupling + narrowing, and the summarize subsystem unification) into a
single `.changeset/streaming-structured-output.md` with the union of
version bumps. The body retains every meaningful section from the
originals (core, openai-base, provider adapters, openrouter decoupling,
summarize) and adds the tagged-CustomEvent type design from the previous
commit.
…lesson

Initial scaffold of `.agent/self-learning/` for the self-improve plugin
(INDEX.md, config.yml, curation-state.yml, coupling.json, .gitignore,
`lessons/promoted/`). Captures the first repo-scoped lesson:
`2026-05-14-build-before-running-examples.md` — run
`pnpm -w run build:all` before starting any example dev server so the
workspace packages have `dist/` outputs vite can resolve.
…s adapter

docs/chat/structured-outputs.md
  Add "Streaming with tools that may pause" subsection covering the
  approval-requested / tool-input-available tagged variants the agent
  loop can emit before structured-output.complete. Code example shows
  the narrowing pattern for all three CUSTOM variants. Cross-links the
  Tool Approval Flow and Client Tools pages.

docs/adapters/openai.md
  Add "Chat Completions API" section after Basic Usage covering the new
  openaiChatCompletions / createOpenaiChatCompletions factories — when
  to pick Chat Completions vs. Responses (reasoning-summary streaming,
  wire-format compatibility), code example, and a link to the Structured
  Outputs page for the streaming case. API Reference at the bottom now
  includes both factories.
…anual iteration to advanced

The previous streaming section opened with \`for await (const chunk of stream)\`
— that's the advanced/server-side-only path. The typical use case is a UI
streaming JSON deltas through SSE from a server endpoint, and the docs should
lead with it.

- New "Server endpoint" subsection: \`chat({outputSchema, stream: true})\` +
  \`toServerSentEventsResponse(stream)\`. One short example, no ceremony.
- New "Client with useChat" subsection: \`useChat\` + \`fetchServerSentEvents\`
  + \`onChunk\`, with \`parsePartialJSON\` driving progressive UI. Shows where
  the validated object lives (the terminal \`structured-output.complete\` event,
  typed as \`T\` via the schema). Notes Vue/Solid/Svelte share the shape.
- "What the stream contains" + "Adapter coverage" tables retained verbatim.
- Old standalone \`for await\` example moved to a new "Advanced: iterating the
  stream directly" subsection at the end, framed as the path for Node scripts,
  CLIs, server-only flows, and tests.
- "Streaming with tools that may pause" reframed to use the \`onChunk\` signature
  (matching the new primary path); a note points back to the advanced section
  for callers iterating the stream directly.
…treaming

Pass the same schema you give chat() on the server to useChat() on the
client, and the hook tracks the progressive object and the validated
terminal payload for you — no external useState, no onChunk ceremony, no
parsePartialJSON calls in user code.

API:

  const { sendMessage, isLoading, partial, final } = useChat({
    connection: fetchServerSentEvents("/api/extract"),
    outputSchema: PersonSchema,
  })

  // partial: DeepPartial<Person>           — updates per TEXT_MESSAGE_CONTENT delta
  // final:   Person | null                  — snaps on structured-output.complete

Implementation:

- New generic param TSchema extends SchemaInput | undefined = undefined on
  UseChatOptions / UseChatReturn / useChat.
- UseChatReturn is conditional on TSchema: when supplied, adds typed
  partial/final; when undefined (default), return is unchanged. Inferred
  automatically from outputSchema option.
- Internal onChunk handler tracks raw JSON buffer via ref, runs
  parsePartialJSON on each TEXT_MESSAGE_CONTENT delta, snaps final on the
  terminal CUSTOM structured-output.complete event, resets all three on
  RUN_STARTED. User's own onChunk callback still fires after internal
  processing — both compose.
- DeepPartial<T> exported for handlers that need to annotate.

The schema is used purely for client-side type inference; server-side
validation still runs against the schema passed to chat({ outputSchema })
on the server route. Works identically for non-streaming endpoints — for
those, partial stays {} and final populates when the single terminal
event arrives.

Type-level tests (tests/use-chat-types.test.ts) pin both branches of the
discriminated return type — useChat() without outputSchema rejects access
to partial/final via @ts-expect-error, useChat() with outputSchema asserts
typed DeepPartial<Person> / Person | null.
…l/final

Apply the same schema-driven structured-output API that landed in
@tanstack/ai-react to the other three framework hooks. Same options shape
(`outputSchema?: TSchema`), same discriminated return type, identical
runtime behavior — only the reactivity primitive differs per framework.

Reactivity primitives:

  Vue    — `Readonly<ShallowRef<DeepPartial<T>>>` / `Readonly<ShallowRef<T | null>>`
  Solid  — `Accessor<DeepPartial<T>>` / `Accessor<T | null>`
  Svelte — `readonly partial: DeepPartial<T>` / `readonly final: T | null`
           (rune-backed getters)

Each hook is now generic on `TSchema extends SchemaInput | undefined`,
inferred from the `outputSchema` option. When omitted (default), the
return type is byte-identical to before; when supplied, `partial`/`final`
are added via a conditional `UseChatReturn<TTools, TSchema>` /
`CreateChatReturn<TTools, TSchema>`. The internal onChunk handler is the
same in all four — RUN_STARTED resets, TEXT_MESSAGE_CONTENT accumulates +
parses, CUSTOM structured-output.complete snaps final. User onChunk is
still invoked after the internal pass.

DeepPartial<T> is exported from each framework package.

Type-level tests in each package pin both branches of the discriminated
return type, mirroring the React variant — pure types, no renderer
required. Existing test suites pass on all three packages:

  ai-vue:    93 tests pass
  ai-solid:  103 tests pass
  ai-svelte: 56 tests pass
…alls

- structured-outputs.md "Client with useChat" section: add a "Rendering
  reasoning and tool calls" subsection explaining that those land on
  messages[…].parts (ThinkingPart, ToolCallPart, ToolResultPart) just
  like normal chat — no separate hook fields. Includes a render snippet
  showing how to hide the raw-JSON TextPart and let the structured view
  (partial/final) replace it.
- Note that useChat (React/Vue/Solid) and createChat (Svelte) all accept
  the same outputSchema option with the same semantics — only the
  reactivity primitive differs.
- Changeset: bump @tanstack/ai-vue, @tanstack/ai-solid, @tanstack/ai-svelte
  to minor alongside @tanstack/ai-react. Replaced the "React" section with
  a unified "Framework hooks" section covering all four packages and
  documenting the per-framework reactivity types.
… APIs

The previous draft of the streaming-with-tools-that-may-pause subsection
invented showApprovalPrompt / runClientTool / resumeWithToolResult
helpers. The actual flow uses the standard chat APIs, identical to a
non-structured chat:

- Server tools with needsApproval:true land on messages[...].parts as
  ToolCallPart with state === 'approval-requested'. Render approval UI
  from messages, respond via addToolApprovalResponse({ id, approved })
  from the hook return (see docs/tools/tool-approval).
- Client tools with execute() set run automatically via the
  ChatClient's onToolCall handler (chat-client.ts:198-233). For manual
  handling, use addToolResult({ toolCallId, tool, output, state }) —
  see docs/tools/client-tools.

Replaced the made-up code with a real example showing an approval-
gated tool inside a structured-output run, using addToolApprovalResponse
and rendering the prompt from messages.parts. The structured stream
layers on top of standard chat — no special pause-handling logic.
…utput orchestrator

Two runtime test files closing the highest-value gaps in the PR's
test coverage:

packages/typescript/ai-react/tests/use-chat-structured-output.test.ts (4 tests)
  - partial updates progressively from TEXT_MESSAGE_CONTENT deltas, final
    snaps on the terminal CUSTOM structured-output.complete event
  - state resets between runs via the stateful mock adapter (RUN_STARTED
    clears partial/final before the second run's deltas land)
  - user-supplied onChunk callback fires after internal tracking, with
    full visibility of the same chunks
  - useChat() without outputSchema doesn't track structured state — the
    internal handler's outputSchema-gate is a no-op

packages/typescript/ai/tests/chat-structured-output-stream.test.ts (6 tests)
  - native adapter.structuredOutputStream path: validated structured-
    output.complete event forwarded with parsed object, schema validation
    failure → RUN_ERROR { code: 'schema-validation' } and NO complete
    event is emitted, reasoning carries through validation onto the
    terminal event, TEXT_MESSAGE_CONTENT deltas pass through
  - fallbackStructuredOutputStream path (adapter lacks native streaming):
    synthesizes RUN_STARTED → TEXT_MESSAGE_* → structured-output.complete
    → RUN_FINISHED around the non-streaming structuredOutput call;
    schema validation failure on the fallback path also emits RUN_ERROR

Together: ai package 769 tests, ai-react 110, ai-vue 93, ai-solid 103,
ai-svelte 56 — all green.
The server-side adapter implementations of structuredOutputStream (shared
by ai-openai, ai-grok, ai-groq via inheritance) had zero unit coverage —
only the e2e suite exercised them. Two new focused test files close that
gap by stubbing the openai SDK client and verifying the AG-UI lifecycle,
request shape, error paths, and per-chunk debug logging.

tests/chat-completions-structured-output-stream.test.ts (6 tests)
  - happy path: RUN_STARTED → TEXT_MESSAGE_* → CUSTOM
    structured-output.complete (typed object + raw JSON) → RUN_FINISHED
  - request shape: stream: true + response_format: { type: 'json_schema',
    json_schema: { strict: true } }; tools are stripped
  - delta accumulation across multiple chunks produces exactly one
    structured-output.complete with the fully-parsed object
  - empty content → RUN_ERROR { code: 'empty-response' }, no
    structured-output.complete is emitted
  - malformed JSON → RUN_ERROR { code: 'parse-error' }
  - per-chunk logger.provider is called once per SDK chunk (verified via
    a spy logger threaded through resolveDebugOption)

tests/responses-structured-output-stream.test.ts (7 tests)
  - same matrix against the Responses API event shape
    (response.created / response.output_text.delta / response.completed)
  - request shape: stream: true + text.format: { type: 'json_schema',
    strict: true }; tools stripped
  - usage promoted from response.completed onto RUN_FINISHED
  - empty content / parse-error → RUN_ERROR with the correct code
  - response.refusal.delta → RUN_ERROR { code: 'refusal' } (Responses-
    only failure surface)
  - per-chunk logger.provider invocation

Stub adapters extend the base directly and pass a fake OpenAI client
whose chat.completions.create / responses.create routes into a per-test
mock — same pattern as the existing chat-completions-text.test.ts and
responses-text.test.ts suites.

openai-base test count: 70 → 83 (all passing). Types + lint clean.
Two failures from the previous push, both stemming from the type-test
files I added: knip flagged @standard-schema/spec as an unlisted import
across ai-react, ai-vue, ai-solid, and ai-svelte test files, and
@tanstack/ai-vue:test:types failed because — unlike the other three —
its tsconfig included tests/, so tsc strictly resolved the import (which
isn't a direct dep, only transitively via @tanstack/ai).

Fixes:

- Add \`@standard-schema/spec: ^1.1.0\` to devDependencies on all four
  framework packages. The import is purely for type-level construction
  in the type tests (StandardJSONSchemaV1<Person, Person> — a phantom
  branded type that simulates what a Zod schema's inferred type would
  look like). devDep is the right scope.
- Align ai-vue's tsconfig with ai-react/ai-solid/ai-svelte by dropping
  tests/ from the tsc include block. Tests are still type-checked by
  vitest at runtime; tsc now only checks src/.

Verified locally: pnpm test:knip, pnpm test:sherif, and test:types on
all four framework packages pass.
@AlemTuzlak AlemTuzlak merged commit 98979f7 into main May 14, 2026
10 checks passed
@AlemTuzlak AlemTuzlak deleted the 526-featai-openrouter-support-streaming-structured-output-response_format-json_schema-with-stream-true branch May 14, 2026 13:39
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

feat(ai-openrouter): support streaming structured output (response_format json_schema with stream: true)

3 participants