Add AI browser prompt API #33
Conversation
Greptile Summary

This PR adds a Browser AI integration that routes requests to Chromium's on-device `LanguageModel` (Prompt) API.
Confidence Score: 4/5 — Safe to merge with the tool-result message fix applied; the rest of the integration is well-structured. One P1 bug in `ui/src/service-worker/browser-ai.ts`.
Sequence Diagram

```mermaid
sequenceDiagram
    participant UI as WasmSetupGuard (Window)
    participant Bridge as BrowserAiBridge (Window)
    participant SW as Service Worker
    participant LM as LanguageModel API
    UI->>Bridge: installBrowserAiBridge()
    UI->>LM: getAvailability()
    LM-->>UI: downloadable | available | ...
    Note over UI: User clicks Download
    UI->>LM: lm.create({ monitor })
    LM-->>UI: downloadprogress events (loaded in [0,1])
    UI->>UI: setBrowserAi({ downloadProgress })
    LM-->>UI: session (download complete)
    UI->>SW: postMessage(BROWSER_AI_AVAILABILITY_CHANGED)
    SW->>SW: invalidateAvailabilityCache()
    UI->>UI: queryClient.invalidateQueries(/v1/models)
    Note over SW: Fetch /v1/models
    SW->>SW: augmentModelsResponse()
    SW->>Bridge: postMessage(BROWSER_AI_REQUEST AVAILABILITY)
    Bridge->>LM: getAvailability()
    LM-->>Bridge: available
    Bridge-->>SW: type AVAILABILITY state available
    SW-->>UI: models list + browser/gemini-nano entry
    Note over SW: Chat request to browser/* model
    SW->>Bridge: postMessage(BROWSER_AI_REQUEST PROMPT)
    Bridge->>LM: lm.create() + promptStreaming()
    LM-->>Bridge: DELTA chunks
    Bridge-->>SW: type DELTA text
    LM-->>Bridge: done
    Bridge-->>SW: type DONE inputTokens outputTokens
    SW-->>UI: SSE stream / JSON response
```
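The prompt round-trip in the diagram — the service worker consuming `DELTA`, `DONE`, and `ERROR` replies from the bridge — can be sketched as a small handler over a discriminated union. This is an illustrative reconstruction; `BridgeReply`, `StreamState`, and `handleReply` are assumed names, not necessarily the PR's actual types.

```typescript
// Illustrative shapes for the bridge's reply messages (names are assumptions).
type BridgeReply =
  | { type: "AVAILABILITY"; state: string }
  | { type: "DELTA"; text: string }
  | { type: "DONE"; inputTokens: number; outputTokens: number }
  | { type: "ERROR"; message: string };

interface StreamState {
  chunks: string[];
  inputTokens?: number;
  outputTokens?: number;
}

// Returns true when the stream is finished, mirroring the handler
// flagged in the review: DELTA accumulates text, DONE records token
// counts and terminates, ERROR surfaces as a thrown exception.
function handleReply(state: StreamState, reply: BridgeReply): boolean {
  if (reply.type === "DELTA") {
    state.chunks.push(reply.text);
    return false;
  }
  if (reply.type === "DONE") {
    state.inputTokens = reply.inputTokens;
    state.outputTokens = reply.outputTokens;
    return true;
  }
  if (reply.type === "ERROR") {
    throw new Error(reply.message);
  }
  return false; // AVAILABILITY replies are ignored on the prompt path
}
```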
Prompt To Fix All With AI

This is a comment left during a code review.
Path: ui/src/service-worker/browser-ai.ts
Line: 931-942
Comment:
**`role: "tool"` messages silently dropped in multi-turn tool use**
`chatMessagesToBridge` filters out any message whose role is not `"system"`, `"user"`, or `"assistant"`. In the OpenAI chat completions format, tool results are sent back by the caller as messages with `role: "tool"`. These are silently discarded, so after the model emits a tool call and the client sends the result, the follow-up prompt to `generateChatCompletionsToolModeResponse` contains no tool-result context — the model sees only the user's original message and will either repeat the tool call or produce a confused response.
The bridge's `LanguageModelMessage` only supports `"system" | "user" | "assistant"`, so tool-result turns need to be re-serialized as `"user"` messages with the same `<tool_result name="...">` markup that `inputToMessages` already emits for the Responses-API path.
How can I resolve this? If you propose a fix, please make it concise.
---
This is a comment left during a code review.
Path: ui/src/components/WasmSetup/WasmSetupGuard.tsx
Line: 409-416
Comment:
**Error catch always resets availability to `"downloadable"`**
On any download failure the catch block hard-codes `availability: "downloadable"`, but that's not necessarily the actual state. If the download fails because the device became ineligible mid-download (e.g. storage pressure), Chrome may now report `"unavailable"`. Setting `"downloadable"` in that case causes the Download button to reappear and immediately fail again on every attempt. Re-querying the real availability on failure and only falling back to `"downloadable"` if the query itself throws would avoid the stale state.
How can I resolve this? If you propose a fix, please make it concise.

Reviews (2): Last reviewed commit: "Review fixes"
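The fix suggested for the catch block can be sketched as a small helper: re-query the real availability on failure, and only fall back to `"downloadable"` if the query itself throws. This is a sketch under assumptions — `queryAvailability` stands in for the real `getAvailability()` call, and `availabilityAfterFailure` is a hypothetical name.

```typescript
// The availability states reported by the on-device model API.
type Availability = "unavailable" | "downloadable" | "downloading" | "available";

// On download failure, report what the API actually says now (e.g. the
// device may have become ineligible mid-download and report "unavailable")
// instead of hard-coding "downloadable". Only fall back to "downloadable"
// when the availability query itself fails.
async function availabilityAfterFailure(
  queryAvailability: () => Promise<Availability>,
): Promise<Availability> {
  try {
    return await queryAvailability();
  } catch {
    return "downloadable";
  }
}
```

In the component's catch block, the result would feed `setBrowserAi({ availability })` so the Download button only reappears when a retry can actually succeed.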
ui/src/service-worker/browser-ai.ts (lines 931–942):

```typescript
          delta: reply.text,
        });
        return false;
      }
      if (reply.type === "DONE") {
        inputTokens = reply.inputTokens;
        outputTokens = reply.outputTokens;
        return true;
      }
      if (reply.type === "ERROR") {
        throw new Error(reply.message);
      }
```
**`role: "tool"` messages silently dropped in multi-turn tool use**

`chatMessagesToBridge` filters out any message whose role is not `"system"`, `"user"`, or `"assistant"`. In the OpenAI chat completions format, tool results are sent back by the caller as messages with `role: "tool"`. These are silently discarded, so after the model emits a tool call and the client sends the result, the follow-up prompt to `generateChatCompletionsToolModeResponse` contains no tool-result context — the model sees only the user's original message and will either repeat the tool call or produce a confused response.

The bridge's `LanguageModelMessage` only supports `"system" | "user" | "assistant"`, so tool-result turns need to be re-serialized as `"user"` messages with the same `<tool_result name="...">` markup that `inputToMessages` already emits for the Responses-API path.
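The re-serialization described above can be sketched as follows. The message shapes are illustrative assumptions — the PR's actual `ChatMessage` and `LanguageModelMessage` types, and the exact `<tool_result>` markup `inputToMessages` emits, may differ.

```typescript
// Illustrative shapes (assumptions, not the PR's actual types).
type ChatMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  name?: string; // tool name accompanying a role:"tool" result (assumption)
};

type LanguageModelMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

// Instead of dropping role:"tool" turns, re-serialize each one as a
// "user" message wrapped in the same <tool_result name="..."> markup
// the Responses-API path already uses, so the model keeps the
// tool-result context in multi-turn tool use.
function chatMessagesToBridge(messages: ChatMessage[]): LanguageModelMessage[] {
  return messages.map((m): LanguageModelMessage => {
    if (m.role === "tool") {
      const name = m.name ?? "unknown";
      return {
        role: "user",
        content: `<tool_result name="${name}">\n${m.content}\n</tool_result>`,
      };
    }
    return { role: m.role, content: m.content };
  });
}
```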
No description provided.