fix(ai-chat): stop 100% CPU freeze during streaming (#1205) #1221
Merged
Summary
Fixes #1205. The AI chat panel pegged the main thread at 100% CPU and froze the app whenever a response streamed in. On every 150 ms flush, the cascade was:

- `existing + text` rebuilt the assistant turn's text block (O(n²) over the response),
- `plainText` was recomputed from blocks on every observer tick, and
- the result fed into `Markdown(text)` keyed by `\.offset`, which made MarkdownUI re-parse the full markdown tree every flush.

Schema prefetch and per-turn attachment resolution also ran on the main actor.

Changes:

- Streaming text now accumulates in an `@ObservationIgnored` buffer on the view model, published through a `streamingTick` counter, so only the streaming bubble re-renders per token.
- `plainText` is now a cached stored property maintained by `appendText`, not recomputed from blocks.
- `ChatContentBlock` is a struct with a stable `UUID` id; the message-view `ForEach` keys on it, so finalized blocks are not re-created when a sibling changes.
- The streaming bubble is its own view (`AIStreamingBubbleView`) and renders `plainText`; `Markdown` only runs once a block is committed at end-of-stream or at a tool boundary.
- The `onChange(of: messages.last?.plainText)` scroll trigger is gone; `.defaultScrollAnchor(.bottom)` plus the `messages.count` trigger handle auto-scroll.
- `prepTask` is now `Task.detached(priority: .userInitiated)`; it only hops to the MainActor for `capturePromptContext` and the `messages.dropLast()` snapshot.
- CHANGELOG entry added under Fixed.
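The buffer-plus-tick pattern can be sketched as follows. This is a minimal illustration of the technique, not the PR's actual code: `ChatViewModel`, `appendToken`, and `commitStreamingBlock` are hypothetical names, and the real view model carries much more state.

```swift
import Observation

// Minimal sketch: hot streaming text is excluded from observation, and a
// cheap counter is the only observable signal emitted per token.
@Observable
final class ChatViewModel {
    // Finalized content; mutating this invalidates views that read it.
    var messages: [String] = []

    // Excluded from observation, so per-token appends do not invalidate
    // every view that reads the model.
    @ObservationIgnored
    private(set) var streamingBuffer: String = ""

    // Only the view that reads `streamingTick` (the streaming bubble)
    // re-renders when a token arrives.
    private(set) var streamingTick: Int = 0

    func appendToken(_ token: String) {
        streamingBuffer += token   // O(token length) append, no block rebuild
        streamingTick += 1         // single observable change per token
    }

    func commitStreamingBlock() {
        // Markdown rendering of the committed text happens from here on,
        // once per block rather than once per flush.
        messages.append(streamingBuffer)
        streamingBuffer = ""
    }
}
```

The key point is that `streamingBuffer` mutations are invisible to the Observation framework; the streaming bubble subscribes to `streamingTick` and reads the buffer on each tick, while every other view stays untouched until `commitStreamingBlock()` runs.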
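The stable-identity point about `ChatContentBlock` can be illustrated like this; the field names and `MessageBlocksView` are assumptions for the sketch, not the app's real types.

```swift
import SwiftUI

// Sketch: a value type with an identity that survives content edits.
struct ChatContentBlock: Identifiable, Equatable {
    let id = UUID()      // fixed for the block's lifetime
    var text: String
}

struct MessageBlocksView: View {
    let blocks: [ChatContentBlock]

    var body: some View {
        // Keying ForEach on Identifiable's stable `id` lets SwiftUI diff by
        // identity: editing or appending one block leaves sibling rows alone.
        // Keying on `\.offset` from `enumerated()` instead ties identity to
        // position, so any change re-creates (and re-parses) every row.
        ForEach(blocks) { block in
            Text(block.text)
        }
    }
}
```

Because `id` is a `let` with a default value, the synthesized memberwise initializer takes only `text`, and copies of the struct keep the same identity as their content mutates.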
Test plan
- `AIChatViewModelMentionsTests` passes (9/9)
- `AnthropicProviderParserTests`, `GeminiProviderParserTests`, `OpenAICompatibleProviderParserTests` pass
- With "Include schema" on, confirm the response streams without UI hang and the final text renders as Markdown

Known follow-up
There is a separate UX issue where, after an Agent-mode tool roundtrip, a fresh assistant turn can be left with empty blocks if the model produces no text after the tool call — the message then sticks on the typing indicator. That is pre-existing (same path existed before this refactor) and is out of scope for this PR; it deserves its own fix to either prune the empty turn or render an explicit "completed" state.
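One of the two suggested follow-up directions, pruning the empty turn, could look roughly like this. It is only a sketch under assumed types: `ChatMessage` and its `blocks` array are hypothetical stand-ins for the app's real model.

```swift
import Foundation

// Hypothetical message shape for illustration only.
struct ChatMessage {
    var blocks: [String]

    // A turn is "empty" if it has no blocks, or only whitespace blocks.
    var isEmptyTurn: Bool {
        blocks.allSatisfy {
            $0.trimmingCharacters(in: .whitespacesAndNewlines).isEmpty
        }
    }
}

// Drop a trailing assistant turn that never received text after the tool
// call, so the UI is not left on the typing indicator indefinitely.
func pruneTrailingEmptyTurn(_ messages: inout [ChatMessage]) {
    if let last = messages.last, last.isEmptyTurn {
        messages.removeLast()
    }
}
```

The alternative (rendering an explicit "completed" state instead of removing the turn) keeps the tool roundtrip visible in the transcript, which may be preferable in Agent mode.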