Fix Docker provider fallback and auth error handling#168

Open
friuns2 wants to merge 16 commits into main from codex/runtime-zen-no-persist

Conversation


@friuns2 friuns2 commented May 13, 2026

Summary

  • keep no-auth Docker startup on runtime OpenCode Zen fallback without persisting config
  • load provider models before Codex model list so Zen/OpenRouter model selectors are current
  • surface invalid/expired auth failures as persisted chat errors with Send feedback
  • hide duplicate live error overlay once the failed turn is persisted

Verification

  • pnpm test:unit src/api/normalizers/v2.test.ts src/server/codexAppServerBridge.archive.test.ts
  • pnpm run build
  • Docker browser regression: no-auth Zen reply, provider switch Zen -> OpenRouter reply, expired/invalid auth 401 chat error + feedback after reload, malformed auth fallback to Zen

Screenshots

  • output/playwright/full-noauth-zen-reply.png
  • output/playwright/full-provider-switch-reply.png
  • output/playwright/full-invalid-auth-error-light.png
  • output/playwright/full-invalid-auth-error-dark.png
  • output/playwright/full-malformed-auth-fallback.png

Summary by CodeRabbit

  • New Features

    • "Send feedback" link for turn-level failures; settings show only visible feedback errors
    • Provider-first model loading with OpenCode Zen fallback, provider-scoped new-thread model persistence
    • Copied auth.json now promotes to active account; dark-mode styling for turn-error feedback
  • Bug Fixes

    • Treat transient first-turn materialization as in-progress (no premature error)
    • Prevent stale/empty provider selection and duplicate "Thinking" overlays; reliable turn-error persistence/display
  • Documentation

    • Docker auth/provider-model guidance, validation steps, and regression checklist
  • Tests

    • New unit and integration tests covering provider-models, free-mode behavior, and turn-error handling

Review Change Stack


friuns2 commented May 13, 2026

/review


coderabbitai Bot commented May 13, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Walkthrough

This PR implements OpenCode Zen runtime fallback for unauthenticated Docker, prioritizes /codex-api/provider-models over Codex model/list with fallback rules, maps first-turn thread materialization errors to in-progress empty turns, surfaces per-turn stream failures into thread turns and UI with a "Send feedback" action, renames free-mode persistence to webui-custom-providers.json, and updates docs/tests.

Changes

OpenCode Zen Docker Auth, Provider Models, and Turn Error Handling

  • Documentation and validation instructions (llm-wiki/raw/fixes/opencode-zen-docker-auth-provider-models.md, llm-wiki/raw/fixes/copied-auth-provider-promotion.md, llm-wiki/wiki/concepts/opencode-zen-big-pickle.md, llm-wiki/wiki/entities/codex-web-local.md, llm-wiki/wiki/index.md, llm-wiki/wiki/log.md, llm-wiki/wiki/overview.md, tests.md, AGENTS.md, whatToTest.md): Adds fix write-ups, concept/entity/index/log/overview updates, Docker/manual validation steps, a Docker Provider/Auth Regression Workflow, manual test checklists, and a whatToTest placeholder documenting verification commands.
  • Provider model loading and gateway changes (src/api/codexGateway.ts, src/api/codexGateway.test.ts): Adds ProviderModelsResponse.exclusive, implements fetchProviderModelIds() (abort timeout, safe JSON parsing, trim/dedupe), and refactors getAvailableModelIds() to prefer provider models, support exclusive/require semantics, and merge with or fall back to model/list; tests cover required vs. optional flows.
  • Server bridge: stream parsing, turn-error merging, materialization handling (src/server/codexAppServerBridge.ts, src/server/codexAppServerBridge.archive.test.ts): Improves error extraction, parses buffered app-server stream frames to extract per-turn failures and inject them into thread.turns, exports isThreadMaterializationPendingError(), applies merging to RPC/thread endpoints, and maps materialization-pending thread/read to 200 with conversationState.turns: [] and isInProgress: true; tests added/updated.
  • Free-mode rename, suppression, and App.vue integration (src/server/freeMode.ts, src/server/freeMode.test.ts, src/App.vue): Renames the persisted free-mode file to webui-custom-providers.json, adds shouldSuppressCommunityFreeModeForCodexAuth(), exports ensureDefaultFreeModeStateForMissingAuthSync() (no synchronous writes), adds hasCodexAuth to free-mode status, integrates one-time external auth import and provider-change refresh in App.vue, and updates tests.
  • Turn error normalization and tests (src/api/normalizers/v2.ts, src/api/normalizers/v2.test.ts): Adds readTurnErrorText() and appends system turnError UI messages for failed turns in normalizeThreadMessagesV2(); unit tests verify the emitted turnError message and metadata.
  • Turn error rendering and feedback capability (src/components/content/ThreadConversation.vue, src/style.css): Renders a conditional "Send feedback" link for turnError messages, wires prepareTurnErrorFeedback() to record the visible failure and update the mailto href, adds an isTurnErrorMessage() predicate, and adds dark-mode styling for .turn-error-feedback.
  • Persisted overlay suppression and provider-scoped persistence (src/composables/useDesktopState.ts, src/composables/useDesktopState.test.ts): selectedLiveOverlay compares the live turnError with the most recent persisted turnError and clears errorText only on an exact match; provider-scoped new-thread model read/set behavior is adjusted; tests validate overlay show/suppress behavior and provider-scoped localStorage persistence.
  • Bridge & archive tests (src/server/codexAppServerBridge.archive.test.ts, src/server/freeMode.test.ts, src/composables/useDesktopState.test.ts, src/api/codexGateway.test.ts, src/api/normalizers/v2.test.ts): Adds/updates unit and archive tests covering materialization detection, default free-mode behaviors, model discovery flows, turnError emission, live overlay suppression, and provider-scoped persistence.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 A Zen fallback blooms in Docker's night,
First turns now pause until their state takes flight,
Provider models lead the chooser's way,
Turn errors show a gentle "Send feedback" stay,
The rabbit packs the patch and hops away.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
  • Description Check — ✅ Passed: Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: The title accurately describes the main changes: Docker provider fallback behavior and authentication error handling improvements across multiple components.
  • Linked Issues Check — ✅ Passed: Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check — ✅ Passed: Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch codex/runtime-zen-no-persist

Comment @coderabbitai help to get the list of available commands and usage tips.


qodo-free-for-open-source-projects Bot commented May 13, 2026

Code Review by Qodo

🐞 Bugs (3) 📘 Rule violations (0)

Action required

1. Turn error ID collision ✓ Resolved 🐞 Bug ≡ Correctness
Description
normalizeThreadMessagesV2 builds the synthetic turn-error message id with turnId ?? ..., so an
empty-string turn.id produces "-error" and causes Vue v-for key collisions. This can break
conversation rendering when multiple failed turns are present.
Code

src/api/normalizers/v2.ts[R639-648]

+    const errorText = readTurnErrorText(turn)
+    if (turn.status === 'failed' && errorText) {
+      messages.push({
+        id: `${turnId ?? `turn-${turnIndex}`}-error`,
+        role: 'system',
+        text: errorText,
+        messageType: 'turnError',
+        turnId,
+        turnIndex,
+      })
Evidence
The normalizer assigns turnId from turn.id without trimming/emptiness checks and then uses it
directly in the message id with ??, so "" yields "-error". The UI renders message lists keyed
by message.id, so duplicate ids are unsafe, and the same normalizer file shows empty ids are
treated as invalid in other paths (trim-length check).

src/api/normalizers/v2.ts[626-649]
src/components/content/ThreadConversation.vue[12-26]
src/api/normalizers/v2.ts[659-667]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`normalizeThreadMessagesV2()` treats `turn.id === ""` as a valid id because it uses a nullish-coalescing fallback (`turnId ?? ...`). That can generate identical message ids like `"-error"`, which then collide with Vue’s `:key="message.id"` rendering and cause incorrect DOM reuse.
## Issue Context
The codebase already treats empty/whitespace turn ids as invalid elsewhere (e.g., when deriving `activeTurnId`), so the normalizer should also treat empty/whitespace ids as absent.
## Fix Focus Areas
- src/api/normalizers/v2.ts[626-649]
- src/api/normalizers/v2.ts[659-667]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
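A minimal sketch of the suggested fix, with hypothetical helper names (the real normalizer lives in src/api/normalizers/v2.ts): treat empty or whitespace-only turn ids as absent before building the synthetic error-message id, so `""` falls back to the index-based form instead of producing a colliding `"-error"` key.

```typescript
// Hypothetical helpers illustrating the fix; names are not from the codebase.
// An empty or whitespace-only id is treated as absent.
function toValidTurnId(id: string | null | undefined): string | null {
  const trimmed = (id ?? '').trim()
  return trimmed.length > 0 ? trimmed : null
}

// Builds a unique synthetic id for a failed turn's error message.
function turnErrorMessageId(turnId: string | null | undefined, turnIndex: number): string {
  return `${toValidTurnId(turnId) ?? `turn-${turnIndex}`}-error`
}
```

With this guard, two failed turns with empty ids produce `turn-0-error` and `turn-1-error` rather than the duplicate `-error`, keeping Vue's `:key="message.id"` unique.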


2. Wiki log entry not appended ✓ Resolved 📘 Rule violation ◔ Observability
Description
The new ingest entry in llm-wiki/wiki/log.md was inserted at the top of the file instead of being
appended, breaking the append-only audit trail requirement. This can undermine traceability
expectations for wiki ingest operations.
Code

llm-wiki/wiki/log.md[R3-8]

+## [2026-05-13] ingest | OpenCode Zen Docker auth and provider models
+- Added source: `raw/fixes/opencode-zen-docker-auth-provider-models.md`.
+- Updated wiki page: `concepts/opencode-zen-big-pickle.md`.
+- Documents: no-auth Zen runtime fallback, auth-mounted Docker switching back to Codex defaults, first-turn materialization as a transient live-state condition, provider-model-first loading, and the build-time Docker install workaround for runtime `pnpm dlx` OOM risk.
+- Updated `overview.md`, `entities/codex-web-local.md`, and `index.md`.
+
Evidence
PR Compliance ID 7 requires wiki ingest operations to append exactly one new entry to
llm-wiki/wiki/log.md and keep it append-only. The diff shows the new 2026-05-13 entry inserted at
the top of the log (right-side lines 3-8), indicating a non-append edit.

AGENTS.md
llm-wiki/wiki/log.md[3-8]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`llm-wiki/wiki/log.md` must be append-only, but the new ingest entry was prepended near the top of the file.
## Issue Context
Compliance requires that ingest operations append exactly one new entry to the end of `llm-wiki/wiki/log.md` without rewriting existing history.
## Fix Focus Areas
- llm-wiki/wiki/log.md[3-8]



3. Overlay hides new errors ✓ Resolved 🐞 Bug ≡ Correctness
Description
In useDesktopState.selectedLiveOverlay, errorText is blanked whenever any persisted message in the
thread is a turnError, which can suppress the live overlay for a subsequent failed turn until the
new failure is persisted (or forever if message loading fails). This can leave the user with no
visible error for the new failure during the persistence gap.
Code

src/composables/useDesktopState.ts[R1524-1526]

+    const hasPersistedTurnError = !isInProgress && (persistedMessagesByThreadId.value[threadId] ?? [])
+      .some((message) => message.messageType === 'turnError')
+    const errorText = hasPersistedTurnError ? '' : (turnErrorByThreadId.value[threadId]?.message ?? '').trim()
Evidence
The overlay suppression is based on the existence of any persisted turnError, while
turnErrorByThreadId is updated for each new failure and persistedMessages are updated asynchronously
via loadMessages/getThreadDetail. This creates a window where a new failure’s live overlay can be
suppressed simply because an older persisted turnError exists.

src/composables/useDesktopState.ts[1515-1527]
src/composables/useDesktopState.ts[3694-3705]
src/composables/useDesktopState.ts[4208-4271]
src/api/normalizers/v2.ts[626-649]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`selectedLiveOverlay` suppresses `errorText` if *any* persisted `turnError` exists in the thread, which can hide the live overlay error for later failures until `loadMessages()` persists the new error (and if that fetch fails, the new error may never be shown).
### Issue Context
- `turnErrorByThreadId` is set immediately on `turn/completed` failures.
- Persisted `turnError` messages are only updated after `getThreadDetail()`/`loadMessages()` returns normalized thread history.
- Current suppression condition does not correlate the persisted `turnError` to the active failure.
### Fix Focus Areas
- src/composables/useDesktopState.ts[1515-1534]
### Fix approach
Update the suppression logic to hide the live overlay only when the persisted history already contains the *same* failure (ideally same turnId/turnIndex, or at least the most-recent persisted turnError matching the live error message). For example:
- Compute `liveErrorText` from `turnErrorByThreadId`.
- Find the most recent persisted `turnError` (by message order / highest `turnIndex`).
- Suppress the overlay only if `!isInProgress` AND `liveErrorText` matches that most-recent persisted `turnError` (and/or its `turnId`).
This preserves the “no duplicate overlay after persistence” goal without hiding new failures when an older turn error exists.

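The suppression rule described in the fix approach can be sketched as a standalone predicate (interface and function names here are illustrative, not the actual composable API): only the most recent persisted turnError counts, and it must correspond to the live failure, by turnId when both sides have one, otherwise by text.

```typescript
// Illustrative shape of a persisted chat message; not the real project type.
interface PersistedMessage {
  messageType: string
  text: string
  turnId?: string | null
}

// Hide the live overlay only when the latest persisted turnError matches
// the current live failure, so older persisted errors never mask a new one.
function shouldSuppressLiveError(
  liveErrorText: string,
  liveTurnId: string | null,
  persistedMessages: PersistedMessage[],
  isInProgress: boolean,
): boolean {
  if (isInProgress || !liveErrorText.trim()) return false
  // Scan from the end so only the most recent persisted failure is considered.
  for (let i = persistedMessages.length - 1; i >= 0; i -= 1) {
    const message = persistedMessages[i]
    if (message.messageType !== 'turnError') continue
    if (liveTurnId && message.turnId) return message.turnId === liveTurnId
    return message.text.trim() === liveErrorText.trim()
  }
  return false
}
```

An older persisted turnError with a different turnId no longer suppresses the overlay for a new failure that has not been persisted yet.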



Remediation recommended

4. Timeout signal breaks models 🐞 Bug ☼ Reliability ⭐ New
Description
fetchProviderModelIds() uses AbortSignal.timeout() in browser code; if that API is unavailable, the
provider-model fetch fails and getAvailableModelIds() returns an empty list for provider-backed
modes (requireProviderModels=true), leaving the model selector with no choices.
Code

src/api/codexGateway.ts[R1891-1924]

+async function fetchProviderModelIds(): Promise<{ ids: string[], exclusive: boolean } | null> {
  try {
    const response = await fetch('/codex-api/provider-models', {
      signal: AbortSignal.timeout(PROVIDER_MODELS_FETCH_TIMEOUT_MS),
    })
-    let providerPayload: (ProviderModelsResponse & { exclusive?: boolean }) | null = null
+    let providerPayload: ProviderModelsResponse | null = null
    try {
-      providerPayload = await response.json() as ProviderModelsResponse & { exclusive?: boolean }
+      providerPayload = await response.json() as ProviderModelsResponse
    } catch {
      providerPayload = null
    }

    if (response.ok && Array.isArray(providerPayload?.data)) {
-      sawProviderModels = true
-      if (providerPayload.exclusive) {
-        return providerPayload.data.filter((c): c is string => typeof c === 'string' && c.trim().length > 0)
-      }
-      for (const candidate of providerPayload.data) {
-        if (typeof candidate !== 'string') continue
-        const normalized = candidate.trim()
-        if (!normalized || ids.includes(normalized)) continue
-        ids.push(normalized)
+      return {
+        ids: providerPayload.data
+          .map((candidate) => typeof candidate === 'string' ? candidate.trim() : '')
+          .filter((candidate, index, candidates): candidate is string =>
+            candidate.length > 0 && candidates.indexOf(candidate) === index),
+        exclusive: providerPayload.exclusive === true,
      }
    }
  } catch {
    // Keep Codex usable when the provider-models endpoint is unavailable.
  }
+  return null
+}
+
+export async function getAvailableModelIds(options: { includeProviderModels?: boolean; requireProviderModels?: boolean } = {}): Promise<string[]> {
+  const shouldIncludeProviderModels = options.includeProviderModels !== false
+  const providerModels = shouldIncludeProviderModels ? await fetchProviderModelIds() : null
+
+  if (providerModels?.exclusive || options.requireProviderModels) {
+    return providerModels?.ids ?? []
+  }
Evidence
The provider-model fetch hard-depends on AbortSignal.timeout, and provider-backed providers set
requireProviderModels: true, which makes getAvailableModelIds return [] when provider models
are unavailable; the same module demonstrates a compatibility-safe timeout pattern using
AbortController.

src/api/codexGateway.ts[1891-1924]
src/composables/useDesktopState.ts[1878-1890]
src/api/codexGateway.ts[3375-3395]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`fetchProviderModelIds()` passes `signal: AbortSignal.timeout(...)`. In runtimes where `AbortSignal.timeout` is not defined, this throws and is caught, but `getAvailableModelIds()` then returns `[]` when `requireProviderModels` is true, producing an empty model list for provider-backed providers.

## Issue Context
The codebase already uses an AbortController+setTimeout pattern elsewhere (e.g., file upload), so adopting the same pattern here avoids a hard dependency on `AbortSignal.timeout`.

## Fix Focus Areas
- src/api/codexGateway.ts[1891-1924]
- src/api/codexGateway.ts[3375-3395]
- src/composables/useDesktopState.ts[1878-1890]

## Suggested fix
- Replace `AbortSignal.timeout(...)` usage with a local `AbortController` and `setTimeout(() => controller.abort(), PROVIDER_MODELS_FETCH_TIMEOUT_MS)`.
- Ensure the timeout is cleared in a `finally` block.
- Keep the existing behavior (return `null` on failures), but avoid the failure being caused by missing timeout API support.

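The AbortController + setTimeout pattern the review recommends can be sketched like this (the timeout constant's value is assumed; the real one is defined in src/api/codexGateway.ts). It avoids the hard dependency on `AbortSignal.timeout`, which is not available in all runtimes, while preserving the return-null-on-failure behavior.

```typescript
// Assumed value; mirrors PROVIDER_MODELS_FETCH_TIMEOUT_MS in the gateway.
const PROVIDER_MODELS_FETCH_TIMEOUT_MS = 5000

// Compatibility-safe timed fetch: aborts via a plain AbortController and
// always clears the timer, degrading to null on abort or network failure.
async function fetchWithTimeout(url: string): Promise<Response | null> {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), PROVIDER_MODELS_FETCH_TIMEOUT_MS)
  try {
    return await fetch(url, { signal: controller.signal })
  } catch {
    // Timeouts and network errors fall back to "no provider models".
    return null
  } finally {
    clearTimeout(timer)
  }
}
```

Because the failure path returns null instead of throwing, `getAvailableModelIds()` can still fall back to model/list rather than surfacing an empty selector.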


5. Provider ID migration missing 🐞 Bug ≡ Correctness ⭐ New
Description
Runtime free-mode config now uses underscore provider IDs (e.g., opencode_zen/openrouter_free), but
useDesktopState persists provider-scoped model choices under keys derived directly from
config/read’s model_provider; this will stop restoring any existing saved selections written under
the previous hyphenated provider IDs.
Code

src/server/freeMode.ts[R238-290]

+    const providerConfigKey = `model_providers.${OPENCODE_ZEN_RUNTIME_PROVIDER_ID}`
    const baseUrl = serverPort
      ? `http://127.0.0.1:${serverPort}/codex-api/zen-proxy/v1`
      : OPENCODE_ZEN_BASE_URL
    const wireApi = serverPort ? 'responses' : (state.wireApi || 'chat')
    const authArgs: string[] = serverPort
-      ? ['-c', `model_providers.${OPENCODE_ZEN_PROVIDER_ID}.experimental_bearer_token="zen-proxy-token"`]
-      : ['-c', `model_providers.${OPENCODE_ZEN_PROVIDER_ID}.env_key="OPENCODE_ZEN_API_KEY"`]
+      ? ['-c', `${providerConfigKey}.experimental_bearer_token="zen-proxy-token"`]
+      : ['-c', `${providerConfigKey}.env_key="OPENCODE_ZEN_API_KEY"`]
    return [
      '-c', `model="${model}"`,
-      '-c', `model_provider="${OPENCODE_ZEN_PROVIDER_ID}"`,
-      '-c', `model_providers.${OPENCODE_ZEN_PROVIDER_ID}.name="OpenCode Zen"`,
-      '-c', `model_providers.${OPENCODE_ZEN_PROVIDER_ID}.base_url="${baseUrl}"`,
-      '-c', `model_providers.${OPENCODE_ZEN_PROVIDER_ID}.wire_api="${wireApi}"`,
+      '-c', `model_provider="${OPENCODE_ZEN_RUNTIME_PROVIDER_ID}"`,
+      '-c', `${providerConfigKey}.name="OpenCode Zen"`,
+      '-c', `${providerConfigKey}.base_url="${baseUrl}"`,
+      '-c', `${providerConfigKey}.wire_api="${wireApi}"`,
      ...authArgs,
    ]
  }

  if (state.provider === 'custom' && state.customBaseUrl) {
+    const providerConfigKey = `model_providers.${CUSTOM_RUNTIME_PROVIDER_ID}`
    const baseUrl = serverPort
      ? `http://127.0.0.1:${serverPort}/codex-api/custom-proxy/v1`
      : state.customBaseUrl
    const wireApi = serverPort ? 'responses' : (state.wireApi || 'responses')
    const authArgs: string[] = serverPort
-      ? ['-c', `model_providers.${CUSTOM_PROVIDER_ID}.experimental_bearer_token="custom-proxy-token"`]
-      : ['-c', `model_providers.${CUSTOM_PROVIDER_ID}.env_key="CUSTOM_ENDPOINT_API_KEY"`]
+      ? ['-c', `${providerConfigKey}.experimental_bearer_token="custom-proxy-token"`]
+      : ['-c', `${providerConfigKey}.env_key="CUSTOM_ENDPOINT_API_KEY"`]
    const modelArgs: string[] = state.model?.trim()
      ? ['-c', `model="${state.model.trim()}"`]
      : []
    return [
      ...modelArgs,
-      '-c', `model_provider="${CUSTOM_PROVIDER_ID}"`,
-      '-c', `model_providers.${CUSTOM_PROVIDER_ID}.name="Custom Endpoint"`,
-      '-c', `model_providers.${CUSTOM_PROVIDER_ID}.base_url="${baseUrl}"`,
-      '-c', `model_providers.${CUSTOM_PROVIDER_ID}.wire_api="${wireApi}"`,
+      '-c', `model_provider="${CUSTOM_RUNTIME_PROVIDER_ID}"`,
+      '-c', `${providerConfigKey}.name="Custom Endpoint"`,
+      '-c', `${providerConfigKey}.base_url="${baseUrl}"`,
+      '-c', `${providerConfigKey}.wire_api="${wireApi}"`,
      ...authArgs,
    ]
  }

  if (!state.apiKey) return []
+  const providerConfigKey = `model_providers.${FREE_MODE_RUNTIME_PROVIDER_ID}`
  const baseUrl = serverPort
    ? `http://127.0.0.1:${serverPort}/codex-api/openrouter-proxy/v1`
    : FREE_MODE_BASE_URL
  const bearerToken = serverPort ? 'openrouter-proxy-token' : state.apiKey
  return [
    '-c', `model="${state.model}"`,
-    '-c', `model_provider="${FREE_MODE_PROVIDER_ID}"`,
-    '-c', `model_providers.${FREE_MODE_PROVIDER_ID}.name="OpenRouter Free"`,
-    '-c', `model_providers.${FREE_MODE_PROVIDER_ID}.base_url="${baseUrl}"`,
-    '-c', `model_providers.${FREE_MODE_PROVIDER_ID}.wire_api="responses"`,
-    '-c', `model_providers.${FREE_MODE_PROVIDER_ID}.experimental_bearer_token="${bearerToken}"`,
+    '-c', `model_provider="${FREE_MODE_RUNTIME_PROVIDER_ID}"`,
+    '-c', `${providerConfigKey}.name="OpenRouter Free"`,
+    '-c', `${providerConfigKey}.base_url="${baseUrl}"`,
+    '-c', `${providerConfigKey}.wire_api="responses"`,
+    '-c', `${providerConfigKey}.experimental_bearer_token="${bearerToken}"`,
Evidence
freeMode now emits underscore-safe model_provider IDs, while useDesktopState builds provider-scoped
localStorage keys from the providerId string without any mapping; therefore previously stored keys
under hyphenated IDs will no longer match.

src/server/freeMode.ts[233-291]
src/composables/useDesktopState.ts[205-218]
src/composables/useDesktopState.ts[1926-1931]
src/api/codexGateway.ts[1942-1948]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Provider-scoped model persistence uses `config/read`'s `model_provider` string as the provider context ID. This PR changes the runtime provider IDs used in `-c model_provider=...` to underscore-safe variants, which will orphan previously persisted provider-scoped selections stored under the old hyphenated IDs.

## Issue Context
- Runtime config args now emit `model_provider="opencode_zen"` / `openrouter_free` / `custom_endpoint`.
- The UI stores/reads provider-scoped model IDs under `__new-thread-provider__::<providerId>` with no aliasing between hyphen/underscore forms.

## Fix Focus Areas
- src/server/freeMode.ts[233-291]
- src/composables/useDesktopState.ts[205-218]
- src/composables/useDesktopState.ts[1926-1931]

## Suggested fix
Implement a small compatibility layer for provider context IDs, for example:
- When computing the provider context key, canonicalize provider IDs (e.g., map known runtime IDs `opencode_zen -> opencode-zen`, `openrouter_free -> openrouter-free`, `custom_endpoint -> custom-endpoint`, or vice-versa).
- When reading provider-scoped selections, try both canonical and legacy variants.
- Optionally, migrate stored keys on load by rewriting the localStorage map once (copy legacy entries to the canonical key and remove the legacy key).

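A small compatibility shim along the lines suggested could look like this. The alias pairs are taken from the review's examples and should be verified against the actual provider-ID constants; the key prefix matches the `__new-thread-provider__::<providerId>` format described above.

```typescript
// Hypothetical alias table: new underscore-safe runtime ids mapped to the
// legacy hyphenated ids they replace (pairs taken from the review, unverified).
const LEGACY_PROVIDER_ALIASES: Record<string, string> = {
  opencode_zen: 'opencode-zen',
  openrouter_free: 'openrouter-free',
  custom_endpoint: 'custom-endpoint',
}

// Canonical key first, legacy key as a read fallback.
function providerContextKeys(providerId: string): string[] {
  const keys = [`__new-thread-provider__::${providerId}`]
  const legacy = LEGACY_PROVIDER_ALIASES[providerId]
  if (legacy) keys.push(`__new-thread-provider__::${legacy}`)
  return keys
}

// Reads a provider-scoped model selection, trying canonical then legacy keys.
function readProviderModel(store: Record<string, string>, providerId: string): string | null {
  for (const key of providerContextKeys(providerId)) {
    if (key in store) return store[key]
  }
  return null
}
```

Writes would always target the canonical key, so legacy entries are read once and naturally superseded; an explicit one-time migration (copy legacy to canonical, delete legacy) is an optional refinement.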


6. Docs use wrong providerId ✓ Resolved 🐞 Bug ⚙ Maintainability
Description
The new Docker regression workflow asserts model_provider="opencode-zen", but the runtime
free-mode config args now emit underscore-safe provider ids (e.g. model_provider="opencode_zen").
This will cause manual Docker verification to fail expected/actual checks even when the code is
working correctly.
Code

AGENTS.md[R199-203]

+- Start fresh isolated containers on unique localhost ports for at least these cases:
+  - no auth file: no `/codex-home/auth.json`; expect runtime OpenCode Zen fallback, `model_provider="opencode-zen"`, `model="big-pickle"`, send `hi`, wait for an assistant reply.
+  - invalid/expired auth file: mount an `auth.json` with token fields containing invalid/expired strings; expect Codex provider path, send `hi`, wait for final 401/auth error rendered in chat, verify `Send feedback`, reload the thread, verify the error persists, and verify no duplicate live `Thinking` overlay remains after persistence.
+  - malformed auth file: mount invalid JSON as `/codex-home/auth.json`; expect it to be treated as unusable auth and fall back to Zen, then send `hi` and wait for a reply.
+  - provider switch: start from OpenCode Zen, send `hi` and wait for a reply, switch the Provider settings selector to OpenRouter (do not change the model dropdown directly), send `hi` again and wait for a reply.
Evidence
AGENTS.md’s newly added Docker workflow expects model_provider="opencode-zen", but the runtime
config builder sets model_provider using underscore-safe ids (opencode_zen / openrouter_free).
Therefore the documented expected config/read values are out of sync with the code that actually
produces those values.

AGENTS.md[191-203]
src/server/freeMode.ts[233-253]
src/server/freeMode.ts[278-291]
tests.md[5362-5368]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Documentation added/updated in this PR still expects hyphenated runtime provider ids (e.g. `opencode-zen`) in `config/read`, but the server now configures runtime providers with underscore-safe ids (e.g. `opencode_zen`, `openrouter_free`). This mismatch makes the documented Docker regression workflow misleading.
## Issue Context
The runtime ids come from `getFreeModeConfigArgs()` which now sets `model_provider` to `opencode_zen` / `openrouter_free` and uses matching `model_providers.<id>` keys.
## Fix Focus Areas
- AGENTS.md[191-204]
- src/server/freeMode.ts[233-253]
- src/server/freeMode.ts[278-291]
- tests.md[5352-5370]



7. Overlay dedup normalization mismatch ✓ Resolved 🐞 Bug ≡ Correctness
Description
useDesktopState.selectedLiveOverlay compares a whitespace-collapsed persisted turn error against a
trim-only live error, so the same error text can fail equality when it contains newlines/multiple
spaces and the live overlay won’t be suppressed after persistence.
Code

src/composables/useDesktopState.ts[R1524-1537]

+    const liveErrorText = (turnErrorByThreadId.value[threadId]?.message ?? '').trim()
+    let latestPersistedTurnErrorText = ''
+    if (!isInProgress && liveErrorText) {
+      const persistedMessages = persistedMessagesByThreadId.value[threadId] ?? []
+      for (let index = persistedMessages.length - 1; index >= 0; index -= 1) {
+        const message = persistedMessages[index]
+        if (message.messageType !== 'turnError') continue
+        latestPersistedTurnErrorText = normalizeMessageText(message.text)
+        break
+      }
+    }
+    const errorText =
+      !isInProgress && liveErrorText && latestPersistedTurnErrorText === liveErrorText
+        ? ''
Evidence
The persisted error text is normalized with normalizeMessageText (collapsing \s+ to a single
space), while the live error is only trimmed; the de-dup equality check uses these mismatched forms.

src/composables/useDesktopState.ts[1515-1538]
src/composables/useDesktopState.ts[714-716]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`selectedLiveOverlay` normalizes persisted `turnError` text via `normalizeMessageText()` (collapses all whitespace), but compares it to `liveErrorText` which is only `.trim()`’d. If the same error contains internal whitespace/newlines, the comparison can fail and the live overlay remains visible even after the error has been persisted into chat history.
## Issue Context
This logic is intended to hide the live error overlay once the *same* failure is persisted as a `turnError` message.
## Fix Focus Areas
- Apply the same normalization on the live error for equality checks (or compare raw/trimmed on both sides).
- Preserve the original live error string for display if you don’t want to collapse newlines in the overlay.
### Suggested approach
Keep two variables:
- `rawLiveErrorText` for UI display
- `normalizedLiveErrorText = normalizeMessageText(rawLiveErrorText)` for comparison
Then compare `normalizedLiveErrorText` to `latestPersistedTurnErrorText`.
## Fix Focus Areas
- src/composables/useDesktopState.ts[1515-1538]
- src/composables/useDesktopState.ts[714-716]

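The symmetric-normalization fix can be sketched in isolation (this local `normalizeMessageText` mirrors the collapse-whitespace behavior described in the evidence; the real helper lives in useDesktopState.ts): normalize both sides of the comparison and keep the raw live string for display.

```typescript
// Mirrors the described normalizeMessageText behavior: collapse runs of
// whitespace (including newlines) to single spaces, then trim.
function normalizeMessageText(text: string): string {
  return text.replace(/\s+/g, ' ').trim()
}

// Equality check for de-dup: both the live and persisted error texts pass
// through the same normalization, so internal newlines cannot break the match.
function sameTurnError(liveErrorText: string, persistedErrorText: string): boolean {
  const normalizedLive = normalizeMessageText(liveErrorText)
  return normalizedLive.length > 0
    && normalizedLive === normalizeMessageText(persistedErrorText)
}
```

The overlay would still render the un-collapsed `liveErrorText`; only the comparison uses the normalized form.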


8. Redundant config/read on fallback 🐞 Bug ➹ Performance
Description
When resolveCollaborationModeSettings needs a fallback model, it calls getCurrentModelConfig()
(config/read) and then getAvailableModelIds(), which now fetches /codex-api/provider-models first;
that endpoint may call appServer.rpc('config/read') again, adding an extra round-trip on the
fallback-model selection path.
Code

src/api/codexGateway.ts[R1917-1923]

+export async function getAvailableModelIds(options: { includeProviderModels?: boolean; requireProviderModels?: boolean } = {}): Promise<string[]> {
+  const shouldIncludeProviderModels = options.includeProviderModels !== false
+  const providerModels = shouldIncludeProviderModels ? await fetchProviderModelIds() : null
+
+  if (providerModels?.exclusive || options.requireProviderModels) {
+    return providerModels?.ids ?? []
+  }
Evidence
The fallback-model resolver calls getCurrentModelConfig() then getAvailableModelIds();
getAvailableModelIds() now fetches /codex-api/provider-models first, and the server’s
provider-models endpoint falls back to readProviderBackedModelIds() which begins with
appServer.rpc('config/read'). This creates an avoidable second config/read round-trip.

src/api/codexGateway.ts[1667-1702]
src/api/codexGateway.ts[1917-1923]
src/api/codexGateway.ts[1941-1947]
src/server/codexAppServerBridge.ts[6736-6786]
src/server/codexAppServerBridge.ts[1259-1266]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`resolveCollaborationModeSettings()` already calls `getCurrentModelConfig()` (RPC `config/read`). If it needs a fallback model, it then calls `getAvailableModelIds()` with default options. After this PR, `getAvailableModelIds()` calls `/codex-api/provider-models` first, and the server-side `/codex-api/provider-models` handler can call `appServer.rpc('config/read')` again via `readProviderBackedModelIds()`, duplicating `config/read` work on a critical path.
## Issue Context
This impacts the “no explicit model, no configured model” path, where the UI is trying to quickly find *any* usable model to start a turn.
## Fix Guidance
- For the fallback-model path in `resolveCollaborationModeSettings`, call `getAvailableModelIds({ includeProviderModels: false })` so it goes directly to `model/list`.
- Keep provider-models-first behavior for the model selector / provider-backed contexts (where it matters).
## Fix Focus Areas
- src/api/codexGateway.ts[1667-1702]
- src/api/codexGateway.ts[1917-1933]
- src/server/codexAppServerBridge.ts[6736-6786]
- src/server/codexAppServerBridge.ts[1259-1266]
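A sketch of the suggested call split, under the assumption that `fetchProviderModelIds` and `fetchModelList` stand in for the real `/codex-api/provider-models` and `model/list` calls (the model ids are illustrative):

```typescript
type ModelIdOptions = { includeProviderModels?: boolean }

async function fetchProviderModelIds(): Promise<string[]> {
  return ['big-pickle'] // would hit /codex-api/provider-models (may re-run config/read)
}

async function fetchModelList(): Promise<string[]> {
  return ['gpt-5-codex'] // would hit the model/list RPC directly
}

async function getAvailableModelIds(options: ModelIdOptions = {}): Promise<string[]> {
  if (options.includeProviderModels !== false) {
    const providerIds = await fetchProviderModelIds()
    if (providerIds.length > 0) return providerIds
  }
  return fetchModelList()
}

// Fallback-model path: skip the provider-models round-trip and go straight to model/list.
async function pickFallbackModel(): Promise<string | null> {
  const ids = await getAvailableModelIds({ includeProviderModels: false })
  return ids[0] ?? null
}
```

Model-selector callers keep the default provider-first behavior; only the fallback resolver opts out, avoiding the duplicate `config/read`.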



9. Merge before trim overhead ✓ Resolved 🐞 Bug ➹ Performance
Description
In the /codex-api/rpc handler, mergeStreamTurnErrorsIntoThreadResult maps over the full turns array
before trimThreadTurnsInRpcResult slices it, causing avoidable O(totalTurns) work and allocations
for large threads when the response ultimately returns only the last THREAD_RESPONSE_TURN_LIMIT
turns.
Code

src/server/codexAppServerBridge.ts[R6323-6326]

+        const errorMergedResult = THREAD_METHODS_WITH_TURNS.has(body.method)
+          ? mergeStreamTurnErrorsIntoThreadResult(appServer, rpcResult)
+          : rpcResult
+        const trimmedResult = trimThreadTurnsInRpcResult(body.method, errorMergedResult)
Evidence
The RPC handler merges stream errors first, and the merge implementation maps over every turn. The
trim function then slices down to a fixed limit, making the pre-trim full-array map unnecessary work
for large turn lists.

src/server/codexAppServerBridge.ts[6281-6330]
src/server/codexAppServerBridge.ts[970-1002]
src/server/codexAppServerBridge.ts[887-904]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`/codex-api/rpc` applies `mergeStreamTurnErrorsIntoThreadResult()` before `trimThreadTurnsInRpcResult()`, so it may iterate and clone a large `thread.turns` array even though most turns are immediately dropped.
### Issue Context
- `mergeStreamTurnErrorsIntoThreadResult` does `turns.map(...)` over the entire returned turns list.
- `trimThreadTurnsInRpcResult` exists to cap turns to `THREAD_RESPONSE_TURN_LIMIT`.
- In `/codex-api/rpc`, the returned payload is what matters; merging errors for turns that will be trimmed away provides no value.
### Fix Focus Areas
- src/server/codexAppServerBridge.ts[6281-6330]
- src/server/codexAppServerBridge.ts[887-904]
- src/server/codexAppServerBridge.ts[970-1012]
### Fix approach
In the `/codex-api/rpc` path, reorder processing so trimming happens before merging stream errors:
1) `const trimmedResult = trimThreadTurnsInRpcResult(body.method, rpcResult)`
2) if method has turns, run `mergeStreamTurnErrorsIntoThreadResult(appServer, trimmedResult)`
3) continue with sanitization/skill input merge.
This confines merging work to the slice that will actually be returned to the client.
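The reordering can be sketched as follows; `trimTurns` and `mergeErrors` are stand-ins for `trimThreadTurnsInRpcResult` and `mergeStreamTurnErrorsIntoThreadResult`, and the limit is illustrative:

```typescript
const THREAD_RESPONSE_TURN_LIMIT = 2

type Turn = { id: string; status: string; error?: string }
type ThreadResult = { turns: Turn[] }

const trimTurns = (result: ThreadResult): ThreadResult => ({
  turns: result.turns.slice(-THREAD_RESPONSE_TURN_LIMIT),
})

const mergeErrors = (result: ThreadResult, errors: Map<string, string>): ThreadResult => ({
  // Mapping happens after the trim, so the work is O(limit), not O(totalTurns).
  turns: result.turns.map((turn) => ({ ...turn, error: errors.get(turn.id) ?? turn.error })),
})

const rpcResult: ThreadResult = {
  turns: [
    { id: 't1', status: 'completed' },
    { id: 't2', status: 'completed' },
    { id: 't3', status: 'failed' },
  ],
}
const streamErrors = new Map([['t3', '401 Unauthorized']])

// Trim first, then merge errors only into the slice that is returned to the client.
const trimmed = trimTurns(rpcResult)
const merged = mergeErrors(trimmed, streamErrors)
```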




Previous review results

Review updated until commit b51f44e

Results up to commit d939a17


🐞 Bugs (3) 📘 Rule violations (1)


Action required
1. Turn error ID collision 🐞 Bug ≡ Correctness ⭐ New
Description
normalizeThreadMessagesV2 builds the synthetic turn-error message id with turnId ?? ..., so an
empty-string turn.id produces "-error" and causes Vue v-for key collisions. This can break
conversation rendering when multiple failed turns are present.
Code

src/api/normalizers/v2.ts[R639-648]

+    const errorText = readTurnErrorText(turn)
+    if (turn.status === 'failed' && errorText) {
+      messages.push({
+        id: `${turnId ?? `turn-${turnIndex}`}-error`,
+        role: 'system',
+        text: errorText,
+        messageType: 'turnError',
+        turnId,
+        turnIndex,
+      })
Evidence
The normalizer assigns turnId from turn.id without trimming/emptiness checks and then uses it
directly in the message id with ??, so "" yields "-error". The UI renders message lists keyed
by message.id, so duplicate ids are unsafe, and the same normalizer file shows empty ids are
treated as invalid in other paths (trim-length check).

src/api/normalizers/v2.ts[626-649]
src/components/content/ThreadConversation.vue[12-26]
src/api/normalizers/v2.ts[659-667]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`normalizeThreadMessagesV2()` treats `turn.id === ""` as a valid id because it uses a nullish-coalescing fallback (`turnId ?? ...`). That can generate identical message ids like `"-error"`, which then collide with Vue’s `:key="message.id"` rendering and cause incorrect DOM reuse.

## Issue Context
The codebase already treats empty/whitespace turn ids as invalid elsewhere (e.g., when deriving `activeTurnId`), so the normalizer should also treat empty/whitespace ids as absent.

## Fix Focus Areas
- src/api/normalizers/v2.ts[626-649]
- src/api/normalizers/v2.ts[659-667]
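A sketch of treating empty/whitespace turn ids as absent before building the synthetic message id; the helper names mirror the review text but are assumptions here:

```typescript
const toValidTurnId = (id: string | undefined): string | undefined =>
  id !== undefined && id.trim().length > 0 ? id : undefined

const buildTurnErrorMessageId = (rawTurnId: string | undefined, turnIndex: number): string => {
  const turnId = toValidTurnId(rawTurnId)
  // "" no longer slips through ?? as a valid id, so ids stay unique per turn index.
  return `${turnId ?? `turn-${turnIndex}`}-error`
}
```

This keeps `:key="message.id"` unique even when several failed turns arrive with empty ids.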



2. Wiki log entry not appended 📘 Rule violation ◔ Observability
Description
The new ingest entry in llm-wiki/wiki/log.md was inserted at the top of the file instead of being
appended, breaking the append-only audit trail requirement. This can undermine traceability
expectations for wiki ingest operations.
Code

llm-wiki/wiki/log.md[R3-8]

+## [2026-05-13] ingest | OpenCode Zen Docker auth and provider models
+- Added source: `raw/fixes/opencode-zen-docker-auth-provider-models.md`.
+- Updated wiki page: `concepts/opencode-zen-big-pickle.md`.
+- Documents: no-auth Zen runtime fallback, auth-mounted Docker switching back to Codex defaults, first-turn materialization as a transient live-state condition, provider-model-first loading, and the build-time Docker install workaround for runtime `pnpm dlx` OOM risk.
+- Updated `overview.md`, `entities/codex-web-local.md`, and `index.md`.
+
Evidence
PR Compliance ID 7 requires wiki ingest operations to append exactly one new entry to
llm-wiki/wiki/log.md and keep it append-only. The diff shows the new 2026-05-13 entry inserted at
the top of the log (right-side lines 3-8), indicating a non-append edit.

AGENTS.md
llm-wiki/wiki/log.md[3-8]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`llm-wiki/wiki/log.md` must be append-only, but the new ingest entry was prepended near the top of the file.
## Issue Context
Compliance requires that ingest operations append exactly one new entry to the end of `llm-wiki/wiki/log.md` without rewriting existing history.
## Fix Focus Areas
- llm-wiki/wiki/log.md[3-8]
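One way to keep the log append-only is to write new entries with an append call rather than rewriting the file. A minimal sketch, using a temp file as a stand-in for `llm-wiki/wiki/log.md` (the entry text is illustrative):

```typescript
import { appendFileSync, readFileSync, writeFileSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

// appendFileSync adds to the end without touching existing history.
const logPath = join(tmpdir(), 'wiki-log-demo.md')
writeFileSync(logPath, '# Wiki ingest log\n\n## [2026-05-10] earlier entry\n')

appendFileSync(
  logPath,
  '\n## [2026-05-13] ingest | OpenCode Zen Docker auth and provider models\n',
)

const lines = readFileSync(logPath, 'utf8').trimEnd().split('\n')
```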



3. Overlay hides new errors ✓ Resolved 🐞 Bug ≡ Correctness
Description
In useDesktopState.selectedLiveOverlay, errorText is blanked whenever any persisted message in the
thread is a turnError, which can suppress the live overlay for a subsequent failed turn until the
new failure is persisted (or forever if message loading fails). This can leave the user with no
visible error for the new failure during the persistence gap.
Code

src/composables/useDesktopState.ts[R1524-1526]

+    const hasPersistedTurnError = !isInProgress && (persistedMessagesByThreadId.value[threadId] ?? [])
+      .some((message) => message.messageType === 'turnError')
+    const errorText = hasPersistedTurnError ? '' : (turnErrorByThreadId.value[threadId]?.message ?? '').trim()
Evidence
The overlay suppression is based on the existence of any persisted turnError, while
turnErrorByThreadId is updated for each new failure and persistedMessages are updated asynchronously
via loadMessages/getThreadDetail. This creates a window where a new failure’s live overlay can be
suppressed simply because an older persisted turnError exists.

src/composables/useDesktopState.ts[1515-1527]
src/composables/useDesktopState.ts[3694-3705]
src/composables/useDesktopState.ts[4208-4271]
src/api/normalizers/v2.ts[626-649]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`selectedLiveOverlay` suppresses `errorText` if *any* persisted `turnError` exists in the thread, which can hide the live overlay error for later failures until `loadMessages()` persists the new error (and if that fetch fails, the new error may never be shown).
### Issue Context
- `turnErrorByThreadId` is set immediately on `turn/completed` failures.
- Persisted `turnError` messages are only updated after `getThreadDetail()`/`loadMessages()` returns normalized thread history.
- Current suppression condition does not correlate the persisted `turnError` to the active failure.
### Fix Focus Areas
- src/composables/useDesktopState.ts[1515-1534]
### Fix approach
Update the suppression logic to hide the live overlay only when the persisted history already contains the *same* failure (ideally same turnId/turnIndex, or at least the most-recent persisted turnError matching the live error message). For example:
- Compute `liveErrorText` from `turnErrorByThreadId`.
- Find the most recent persisted `turnError` (by message order / highest `turnIndex`).
- Suppress the overlay only if `!isInProgress` AND `liveErrorText` matches that most-recent persisted `turnError` (and/or its `turnId`).
This preserves the “no duplicate overlay after persistence” goal without hiding new failures when an older turn error exists.
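A minimal sketch of that suppression rule; the message shape is an assumption based on the fields named in this review (`messageType`, `text`, `turnId`):

```typescript
type ChatMessage = { messageType: string; text: string; turnId?: string }

const latestPersistedTurnError = (messages: ChatMessage[]): ChatMessage | undefined => {
  for (let i = messages.length - 1; i >= 0; i -= 1) {
    if (messages[i].messageType === 'turnError') return messages[i]
  }
  return undefined
}

const shouldSuppressOverlay = (
  isInProgress: boolean,
  liveErrorText: string,
  messages: ChatMessage[],
): boolean => {
  if (isInProgress || !liveErrorText) return false
  // Suppress only when the most recent persisted turnError is this exact failure.
  return latestPersistedTurnError(messages)?.text === liveErrorText
}
```

An older persisted `turnError` no longer hides a new live failure, because only the most recent persisted error is compared.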




Remediation recommended
4. Docs use wrong providerId 🐞 Bug ⚙ Maintainability ⭐ New
Description
The new Docker regression workflow asserts model_provider="opencode-zen", but the runtime
free-mode config args now emit underscore-safe provider ids (e.g. model_provider="opencode_zen").
This will cause manual Docker verification to fail expected/actual checks even when the code is
working correctly.
Code

AGENTS.md[R199-203]

+- Start fresh isolated containers on unique localhost ports for at least these cases:
+  - no auth file: no `/codex-home/auth.json`; expect runtime OpenCode Zen fallback, `model_provider="opencode-zen"`, `model="big-pickle"`, send `hi`, wait for an assistant reply.
+  - invalid/expired auth file: mount an `auth.json` with token fields containing invalid/expired strings; expect Codex provider path, send `hi`, wait for final 401/auth error rendered in chat, verify `Send feedback`, reload the thread, verify the error persists, and verify no duplicate live `Thinking` overlay remains after persistence.
+  - malformed auth file: mount invalid JSON as `/codex-home/auth.json`; expect it to be treated as unusable auth and fall back to Zen, then send `hi` and wait for a reply.
+  - provider switch: start from OpenCode Zen, send `hi` and wait for a reply, switch the Provider settings selector to OpenRouter (do not change the model dropdown directly), send `hi` again and wait for a reply.
Evidence
AGENTS.md’s newly added Docker workflow expects model_provider="opencode-zen", but the runtime
config builder sets model_provider using underscore-safe ids (opencode_zen / openrouter_free).
Therefore the documented expected config/read values are out of sync with the code that actually
produces those values.

AGENTS.md[191-203]
src/server/freeMode.ts[233-253]
src/server/freeMode.ts[278-291]
tests.md[5362-5368]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Documentation added/updated in this PR still expects hyphenated runtime provider ids (e.g. `opencode-zen`) in `config/read`, but the server now configures runtime providers with underscore-safe ids (e.g. `opencode_zen`, `openrouter_free`). This mismatch makes the documented Docker regression workflow misleading.

## Issue Context
The runtime ids come from `getFreeModeConfigArgs()` which now sets `model_provider` to `opencode_zen` / `openrouter_free` and uses matching `model_providers.<id>` keys.

## Fix Focus Areas
- AGENTS.md[191-204]
- src/server/freeMode.ts[233-253]
- src/server/freeMode.ts[278-291]
- tests.md[5352-5370]
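A hypothetical helper matching the underscore-safe ids this finding says `getFreeModeConfigArgs()` now emits; the hyphen-to-underscore mapping rule is an assumption, not confirmed against `src/server/freeMode.ts`:

```typescript
// Illustrative mapping from display provider id to runtime config id.
const toUnderscoreSafeProviderId = (id: string): string => id.replace(/-/g, '_')

const runtimeProviderId = toUnderscoreSafeProviderId('opencode-zen')
```

Under this assumption, the docs should assert `model_provider="opencode_zen"` rather than the hyphenated form.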



5. Overlay dedup normalization mismatch ✓ Resolved 🐞 Bug ≡ Correctness
Description
useDesktopState.selectedLiveOverlay compares a whitespace-collapsed persisted turn error against a
trim-only live error, so the same error text can fail equality when it contains newlines/multiple
spaces and the live overlay won’t be suppressed after persistence.
Code

src/composables/useDesktopState.ts[R1524-1537]

+    const liveErrorText = (turnErrorByThreadId.value[threadId]?.message ?? '').trim()
+    let latestPersistedTurnErrorText = ''
+    if (!isInProgress && liveErrorText) {
+      const persistedMessages = persistedMessagesByThreadId.value[threadId] ?? []
+      for (let index = persistedMessages.length - 1; index >= 0; index -= 1) {
+        const message = persistedMessages[index]
+        if (message.messageType !== 'turnError') continue
+        latestPersistedTurnErrorText = normalizeMessageText(message.text)
+        break
+      }
+    }
+    const errorText =
+      !isInProgress && liveErrorText && latestPersistedTurnErrorText === liveErrorText
+        ? ''
Evidence
The persisted error text is normalized with normalizeMessageText (collapsing \s+ to a single
space), while the live error is only trimmed; the de-dup equality check uses these mismatched forms.

src/composables/useDesktopState.ts[1515-1538]
src/composables/useDesktopState.ts[714-716]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`selectedLiveOverlay` normalizes persisted `turnError` text via `normalizeMessageText()` (collapses all whitespace), but compares it to `liveErrorText` which is only `.trim()`’d. If the same error contains internal whitespace/newlines, the comparison can fail and the live overlay remains visible even after the error has been persisted into chat history.
## Issue Context
This logic is intended to hide the live error overlay once the *same* failure is persisted as a `turnError` message.
## Fix Guidance
- Apply the same normalization on the live error for equality checks (or compare raw/trimmed on both sides).
- Preserve the original live error string for display if you don’t want to collapse newlines in the overlay.
### Suggested approach
Keep two variables:
- `rawLiveErrorText` for UI display
- `normalizedLiveErrorText = normalizeMessageText(rawLiveErrorText)` for comparison
Then compare `normalizedLiveErrorText` to `latestPersistedTurnErrorText`.
## Fix Focus Areas
- src/composables/useDesktopState.ts[1515-1538]
- src/composables/useDesktopState.ts[714-716]



6. Redundant config/read on fallback 🐞 Bug ➹ Performance
Description
When resolveCollaborationModeSettings needs a fallback model, it calls getCurrentModelConfig()
(config/read) and then getAvailableModelIds(), which now fetches /codex-api/provider-models first;
that endpoint may call appServer.rpc('config/read') again, adding an extra round-trip on the
fallback-model selection path.
Code

src/api/codexGateway.ts[R1917-1923]

+export async function getAvailableModelIds(options: { includeProviderModels?: boolean; requireProviderModels?: boolean } = {}): Promise<string[]> {
+  const shouldIncludeProviderModels = options.includeProviderModels !== false
+  const providerModels = shouldIncludeProviderModels ? await fetchProviderModelIds() : null
+
+  if (providerModels?.exclusive || options.requireProviderModels) {
+    return providerModels?.ids ?? []
+  }
Evidence
The fallback-model resolver calls getCurrentModelConfig() then getAvailableModelIds();
getAvailableModelIds() now fetches /codex-api/provider-models first, and the server’s
provider-models endpoint falls back to readProviderBackedModelIds() which begins with
appServer.rpc('config/read'). This creates an avoidable second config/read round-trip.

src/api/codexGateway.ts[1667-1702]
src/api/codexGateway.ts[1917-1923]
src/api/codexGateway.ts[1941-1947]
src/server/codexAppServerBridge.ts[6736-6786]
src/server/codexAppServerBridge.ts[1259-1266]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`resolveCollaborationModeSettings()` already calls `getCurrentModelConfig()` (RPC `config/read`). If it needs a fallback model, it then calls `getAvailableModelIds()` with default options. After this PR, `getAvailableModelIds()` calls `/codex-api/provider-models` first, and the server-side `/codex-api/provider-models` handler can call `appServer.rpc('config/read')` again via `readProviderBackedModelIds()`, duplicating `config/read` work on a critical path.
## Issue Context
This impacts the “no explicit model, no configured model” path, where the UI is trying to quickly find *any* usable model to start a turn.
## Fix Guidance
- For the fallback-model path in `resolveCollaborationModeSettings`, call `getAvailableModelIds({ includeProviderModels: false })` so it goes directly to `model/list`.
- Keep provider-models-first behavior for the model selector / provider-backed contexts (where it matters).
## Fix Focus Areas
- src/api/codexGateway.ts[1667-1702]
- src/api/codexGateway.ts[1917-1933]
- src/server/codexAppServerBridge.ts[6736-6786]
- src/server/codexAppServerBridge.ts[1259-1266]



7. Merge before trim overhead ✓ Resolved 🐞 Bug ➹ Performance
Description
In the /codex-api/rpc handler, mergeStreamTurnErrorsIntoThreadResult maps over the full turns array
before trimThreadTurnsInRpcResult slices it, causing avoidable O(totalTurns) work and allocations
for large threads when the response ultimately returns only the last THREAD_RESPONSE_TURN_LIMIT
turns.
Code

src/server/codexAppServerBridge.ts[R6323-6326]

+        const errorMergedResult = THREAD_METHODS_WITH_TURNS.has(body.method)
+          ? mergeStreamTurnErrorsIntoThreadResult(appServer, rpcResult)
+          : rpcResult
+        const trimmedResult = trimThreadTurnsInRpcResult(body.method, errorMergedResult)
Evidence
The RPC handler merges stream errors first, and the merge implementation maps over every turn. The
trim function then slices down to a fixed limit, making the pre-trim full-array map unnecessary work
for large turn lists.

src/server/codexAppServerBridge.ts[6281-6330]
src/server/codexAppServerBridge.ts[970-1002]
src/server/codexAppServerBridge.ts[887-904]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`/codex-api/rpc` applies `mergeStreamTurnErrorsIntoThreadResult()` before `trimThreadTurnsInRpcResult()`, so it may iterate and clone a large `thread.turns` array even though most turns are immediately dropped.
### Issue Context
- `mergeStreamTurnErrorsIntoThreadResult` does `turns.map(...)` over the entire returned turns list.
- `trimThreadTurnsInRpcResult` exists to cap turns to `THREAD_RESPONSE_TURN_LIMIT`.
- In `/codex-api/rpc`, the returned payload is what matters; merging errors for turns that will be trimmed away provides no value.
### Fix Focus Areas
- src/server/codexAppServerBridge.ts[6281-6330]
- src/server/codexAppServerBridge.ts[887-904]
- src/server/codexAppServerBridge.ts[970-1012]
### Fix approach
In the `/codex-api/rpc` path, reorder processing so trimming happens before merging stream errors:
1) `const trimmedResult = trimThreadTurnsInRpcResult(body.method, rpcResult)`
2) if method has turns, run `mergeStreamTurnErrorsIntoThreadResult(appServer, trimmedResult)`
3) continue with sanitization/skill input merge.
This confines merging work to the slice that will actually be returned to the client.



Results up to commit N/A


🐞 Bugs (1) 📘 Rule violations (1) 📎 Requirement gaps (0)


Action required
1. Wiki log entry not appended 📘 Rule violation ◔ Observability
Description
The new ingest entry in llm-wiki/wiki/log.md was inserted at the top of the file instead of being
appended, breaking the append-only audit trail requirement. This can undermine traceability
expectations for wiki ingest operations.
Code

llm-wiki/wiki/log.md[R3-8]

+## [2026-05-13] ingest | OpenCode Zen Docker auth and provider models
+- Added source: `raw/fixes/opencode-zen-docker-auth-provider-models.md`.
+- Updated wiki page: `concepts/opencode-zen-big-pickle.md`.
+- Documents: no-auth Zen runtime fallback, auth-mounted Docker switching back to Codex defaults, first-turn materialization as a transient live-state condition, provider-model-first loading, and the build-time Docker install workaround for runtime `pnpm dlx` OOM risk.
+- Updated `overview.md`, `entities/codex-web-local.md`,...

@qodo-free-for-open-source-projects

Review Summary by Qodo

🐞 Bug fix ✨ Enhancement


Walkthroughs

Description
• Handle Docker first-turn live-state pending materialization gracefully
• Load provider models before Codex model list for accurate selectors
• Render failed turn errors as persisted chat messages with feedback
• Avoid persisting unauthenticated OpenCode Zen fallback configuration
Diagram
flowchart LR
  A["Docker startup"] --> B["Check auth state"]
  B --> C["No auth"]
  B --> D["Auth exists"]
  C --> E["Runtime Zen fallback<br/>no persist"]
  D --> F["Use Codex provider"]
  E --> G["Load provider models first"]
  F --> G
  G --> H["Model selector ready"]
  I["First turn starts"] --> J["Live-state poll"]
  J --> K["Materialization pending?"]
  K -->|Yes| L["Return empty state<br/>no error"]
  K -->|No| M["Return turn data"]
  N["Turn fails"] --> O["Merge stream errors<br/>into thread result"]
  O --> P["Render error in chat<br/>with feedback button"]


File Changes

1. src/api/codexGateway.ts ✨ Enhancement +31/-27

Refactor provider model loading priority

src/api/codexGateway.ts


2. src/api/codexGateway.test.ts 🧪 Tests +64/-1

Add provider-first model loading tests

src/api/codexGateway.test.ts


3. src/api/normalizers/v2.ts ✨ Enhancement +16/-0

Render failed turn errors as system messages

src/api/normalizers/v2.ts


4. src/api/normalizers/v2.test.ts 🧪 Tests +26/-0

Test failed turn error message rendering

src/api/normalizers/v2.test.ts


5. src/server/codexAppServerBridge.ts ✨ Enhancement +100/-11

Handle materialization pending and merge stream errors

src/server/codexAppServerBridge.ts


6. src/server/codexAppServerBridge.archive.test.ts 🧪 Tests +77/-1

Test pending materialization and auth state handling

src/server/codexAppServerBridge.archive.test.ts


7. src/server/freeMode.ts ⚙️ Configuration changes +1/-1

Rename state file to webui-custom-providers.json

src/server/freeMode.ts


8. src/server/freeMode.test.ts 🧪 Tests +2/-2

Update test descriptions for runtime fallback

src/server/freeMode.test.ts


9. src/composables/useDesktopState.ts 🐞 Bug fix +3/-1

Hide duplicate live error when turn persisted

src/composables/useDesktopState.ts


10. src/components/content/ThreadConversation.vue ✨ Enhancement +24/-0

Add feedback button to turn error messages

src/components/content/ThreadConversation.vue


11. src/style.css Formatting +4/-0

Style turn error feedback button dark theme

src/style.css


12. llm-wiki/raw/fixes/opencode-zen-docker-auth-provider-models.md 📝 Documentation +78/-0

Document Docker auth and provider model fixes

llm-wiki/raw/fixes/opencode-zen-docker-auth-provider-models.md


13. llm-wiki/wiki/concepts/opencode-zen-big-pickle.md 📝 Documentation +13/-0

Add Docker auth switching and model loading notes

llm-wiki/wiki/concepts/opencode-zen-big-pickle.md


14. llm-wiki/wiki/entities/codex-web-local.md 📝 Documentation +3/-0

Update Zen fallback and auth switching documentation

llm-wiki/wiki/entities/codex-web-local.md


15. llm-wiki/wiki/index.md 📝 Documentation +2/-1

Link new Docker auth fix documentation

llm-wiki/wiki/index.md


16. llm-wiki/wiki/overview.md 📝 Documentation +3/-0

Add Docker auth and provider model topics

llm-wiki/wiki/overview.md


17. llm-wiki/wiki/log.md 📝 Documentation +6/-0

Log Docker auth and provider models fix ingestion

llm-wiki/wiki/log.md


18. tests.md 📝 Documentation +99/-5

Update free mode state filename and add Docker tests

tests.md



@qodo-free-for-open-source-projects

qodo-free-for-open-source-projects Bot commented May 13, 2026

Code Review by Qodo


New Review Started

This review has been superseded by a new analysis


@friuns2
Owner Author

friuns2 commented May 13, 2026

/review

@qodo-free-for-open-source-projects

qodo-free-for-open-source-projects Bot commented May 13, 2026

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (0)



Action required

1. Overlay hides new errors 🐞 Bug ≡ Correctness
Description
In useDesktopState.selectedLiveOverlay, errorText is blanked whenever any persisted message in the
thread is a turnError, which can suppress the live overlay for a subsequent failed turn until the
new failure is persisted (or forever if message loading fails). This can leave the user with no
visible error for the new failure during the persistence gap.
Code

src/composables/useDesktopState.ts[R1524-1526]

+    const hasPersistedTurnError = !isInProgress && (persistedMessagesByThreadId.value[threadId] ?? [])
+      .some((message) => message.messageType === 'turnError')
+    const errorText = hasPersistedTurnError ? '' : (turnErrorByThreadId.value[threadId]?.message ?? '').trim()
Evidence
The overlay suppression is based on the existence of any persisted turnError, while
turnErrorByThreadId is updated for each new failure and persistedMessages are updated asynchronously
via loadMessages/getThreadDetail. This creates a window where a new failure’s live overlay can be
suppressed simply because an older persisted turnError exists.

src/composables/useDesktopState.ts[1515-1527]
src/composables/useDesktopState.ts[3694-3705]
src/composables/useDesktopState.ts[4208-4271]
src/api/normalizers/v2.ts[626-649]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`selectedLiveOverlay` suppresses `errorText` if *any* persisted `turnError` exists in the thread, which can hide the live overlay error for later failures until `loadMessages()` persists the new error (and if that fetch fails, the new error may never be shown).

### Issue Context
- `turnErrorByThreadId` is set immediately on `turn/completed` failures.
- Persisted `turnError` messages are only updated after `getThreadDetail()`/`loadMessages()` returns normalized thread history.
- Current suppression condition does not correlate the persisted `turnError` to the active failure.

### Fix Focus Areas
- src/composables/useDesktopState.ts[1515-1534]

### Fix approach
Update the suppression logic to hide the live overlay only when the persisted history already contains the *same* failure (ideally same turnId/turnIndex, or at least the most-recent persisted turnError matching the live error message). For example:
- Compute `liveErrorText` from `turnErrorByThreadId`.
- Find the most recent persisted `turnError` (by message order / highest `turnIndex`).
- Suppress the overlay only if `!isInProgress` AND `liveErrorText` matches that most-recent persisted `turnError` (and/or its `turnId`).
This preserves the “no duplicate overlay after persistence” goal without hiding new failures when an older turn error exists.




Remediation recommended

2. Merge before trim overhead 🐞 Bug ➹ Performance
Description
In the /codex-api/rpc handler, mergeStreamTurnErrorsIntoThreadResult maps over the full turns array
before trimThreadTurnsInRpcResult slices it, causing avoidable O(totalTurns) work and allocations
for large threads when the response ultimately returns only the last THREAD_RESPONSE_TURN_LIMIT
turns.
Code

src/server/codexAppServerBridge.ts[R6323-6326]

+        const errorMergedResult = THREAD_METHODS_WITH_TURNS.has(body.method)
+          ? mergeStreamTurnErrorsIntoThreadResult(appServer, rpcResult)
+          : rpcResult
+        const trimmedResult = trimThreadTurnsInRpcResult(body.method, errorMergedResult)
Evidence
The RPC handler merges stream errors first, and the merge implementation maps over every turn. The
trim function then slices down to a fixed limit, making the pre-trim full-array map unnecessary work
for large turn lists.

src/server/codexAppServerBridge.ts[6281-6330]
src/server/codexAppServerBridge.ts[970-1002]
src/server/codexAppServerBridge.ts[887-904]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`/codex-api/rpc` applies `mergeStreamTurnErrorsIntoThreadResult()` before `trimThreadTurnsInRpcResult()`, so it may iterate and clone a large `thread.turns` array even though most turns are immediately dropped.

### Issue Context
- `mergeStreamTurnErrorsIntoThreadResult` does `turns.map(...)` over the entire returned turns list.
- `trimThreadTurnsInRpcResult` exists to cap turns to `THREAD_RESPONSE_TURN_LIMIT`.
- In `/codex-api/rpc`, the returned payload is what matters; merging errors for turns that will be trimmed away provides no value.

### Fix Focus Areas
- src/server/codexAppServerBridge.ts[6281-6330]
- src/server/codexAppServerBridge.ts[887-904]
- src/server/codexAppServerBridge.ts[970-1012]

### Fix approach
In the `/codex-api/rpc` path, reorder processing so trimming happens before merging stream errors:
1) `const trimmedResult = trimThreadTurnsInRpcResult(body.method, rpcResult)`
2) if method has turns, run `mergeStreamTurnErrorsIntoThreadResult(appServer, trimmedResult)`
3) continue with sanitization/skill input merge.
This confines merging work to the slice that will actually be returned to the client.





@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/composables/useDesktopState.ts`:
- Around line 1524-1526: The current hasPersistedTurnError check hides live
overlay errors if any persisted 'turnError' exists in the thread; change it to
only suppress when a persisted turnError matches the current live error. Update
the logic that computes hasPersistedTurnError (which uses
persistedMessagesByThreadId and is used when computing errorText with
turnErrorByThreadId, isInProgress, and threadId) so the .some(...) predicate
also compares the persisted message content to
turnErrorByThreadId.value[threadId]?.message (or the appropriate identifier on
the current turn error) before treating it as suppression.
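The suppression predicate described above can be sketched as a pure function (the message shapes here are assumptions, not the real `useDesktopState` types):

```typescript
// Illustrative shapes; the real persisted-message type carries more fields.
type PersistedMessage = { type: string; message: string };
type LiveTurnError = { message: string };

function shouldSuppressLiveOverlay(
  persisted: PersistedMessage[],
  liveError: LiveTurnError | undefined,
): boolean {
  if (!liveError) return false;
  // Suppress only when a persisted turnError matches the current live error,
  // not whenever any turnError exists anywhere in the thread.
  return persisted.some(
    (m) => m.type === 'turnError' && m.message === liveError.message,
  );
}

// An unrelated persisted error no longer hides a fresh live overlay:
console.log(shouldSuppressLiveOverlay(
  [{ type: 'turnError', message: 'old failure' }],
  { message: 'new failure' },
)); // false

// Once the failed turn is persisted, the duplicate overlay is hidden:
console.log(shouldSuppressLiveOverlay(
  [{ type: 'turnError', message: 'new failure' }],
  { message: 'new failure' },
)); // true
```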

In `@src/server/codexAppServerBridge.ts`:
- Around line 3352-3358: The routes that currently read the FREE_MODE_STATE_FILE
directly should use the same resolver so runtime and HTTP proxy paths see the
same fallback: replace direct calls to readFreeModeStateSync inside the handlers
for the /codex-api/zen-proxy/v1/responses and
/codex-api/custom-proxy/v1/responses endpoints with
ensureDefaultFreeModeStateForMissingAuthSync(statePath) (or call that helper
from a shared resolver function) and handle a null return consistently; ensure
the handlers accept the in-memory fallback returned by
ensureDefaultFreeModeStateForMissingAuthSync (or short-circuit if null) so the
Docker Zen fallback behaves identically to the app-server runtime path.

In `@tests.md`:
- Line 3376: The sentence in tests.md that currently reads "No config files
outside the project are modified (state stored in
`~/.codex/webui-custom-providers.json`)" is self-contradictory; update it to
clearly state the exception for user-level state by rewording to something like:
"Project config files are not modified; only user-level state is written to
`~/.codex/webui-custom-providers.json`." Locate the sentence mentioning
`~/.codex/webui-custom-providers.json` and replace it with this unambiguous
phrasing so test expectations no longer conflict.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: 48dd27b2-953c-460f-90b3-c52b826e9521

📥 Commits

Reviewing files that changed from the base of the PR and between 1c9dacd and 9f32963.

📒 Files selected for processing (18)
  • llm-wiki/raw/fixes/opencode-zen-docker-auth-provider-models.md
  • llm-wiki/wiki/concepts/opencode-zen-big-pickle.md
  • llm-wiki/wiki/entities/codex-web-local.md
  • llm-wiki/wiki/index.md
  • llm-wiki/wiki/log.md
  • llm-wiki/wiki/overview.md
  • src/api/codexGateway.test.ts
  • src/api/codexGateway.ts
  • src/api/normalizers/v2.test.ts
  • src/api/normalizers/v2.ts
  • src/components/content/ThreadConversation.vue
  • src/composables/useDesktopState.ts
  • src/server/codexAppServerBridge.archive.test.ts
  • src/server/codexAppServerBridge.ts
  • src/server/freeMode.test.ts
  • src/server/freeMode.ts
  • src/style.css
  • tests.md

Comment thread src/composables/useDesktopState.ts Outdated
Comment thread src/server/codexAppServerBridge.ts
Comment thread tests.md Outdated
Comment thread src/composables/useDesktopState.ts Outdated
@friuns2
Owner Author

friuns2 commented May 13, 2026

/review

@qodo-free-for-open-source-projects

qodo-free-for-open-source-projects Bot commented May 13, 2026

Persistent review updated to latest commit 8d63a4d


@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
src/server/codexAppServerBridge.ts (1)

3352-3358: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Route handlers still bypass the new in-memory fallback.

ensureDefaultFreeModeStateForMissingAuthSync() now returns the Zen fallback without writing FREE_MODE_STATE_FILE, but /codex-api/zen-proxy/v1/responses and /codex-api/custom-proxy/v1/responses still read that file directly. On a fresh no-auth startup, the app-server and /codex-api/provider-models will see Zen fallback while those proxy routes still see “no state”, so the Docker fallback path can break as soon as the request goes through the proxy.

Suggested follow-up
- const state = JSON.parse(readFileSync(statePath, 'utf8')) as FreeModeState
- bearerToken = state.apiKey ?? ''
- wireApi = state.wireApi === 'responses' ? 'responses' : 'chat'
+ const state = ensureDefaultFreeModeStateForMissingAuthSync(statePath)
+ bearerToken = state?.apiKey ?? ''
+ wireApi = state?.wireApi === 'responses' ? 'responses' : 'chat'

Apply the same resolver in the custom-proxy handler as well.
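A minimal sketch of the shared resolver both proxy handlers would read through, assuming names from the review (`ensureDefaultFreeModeStateForMissingAuthSync`, the Zen default state) and a simplified state shape:

```typescript
// Illustrative sketch; the real FreeModeState in freeMode.ts has more fields.
import { existsSync, readFileSync } from 'node:fs';

type FreeModeState = { apiKey: string | null; wireApi: 'responses' | 'chat' };

// Assumed stand-in for createDefaultOpenCodeZenFreeModeState() in the PR.
function createDefaultZenState(): FreeModeState {
  return { apiKey: null, wireApi: 'responses' };
}

function ensureDefaultFreeModeStateForMissingAuthSync(
  statePath: string,
): FreeModeState | null {
  if (existsSync(statePath)) {
    try {
      return JSON.parse(readFileSync(statePath, 'utf8')) as FreeModeState;
    } catch {
      return null; // malformed state: let the handler short-circuit
    }
  }
  // In-memory Zen fallback; nothing is written to disk.
  return createDefaultZenState();
}

// Both proxy routes resolve state the same way the runtime path does:
const state = ensureDefaultFreeModeStateForMissingAuthSync(
  '/nonexistent/free-mode-state.json',
);
const bearerToken = state?.apiKey ?? '';
const wireApi = state?.wireApi === 'responses' ? 'responses' : 'chat';
console.log(bearerToken === '' && wireApi === 'responses'); // true
```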

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/server/codexAppServerBridge.ts` around lines 3352 - 3358, Route handlers
for /codex-api/zen-proxy/v1/responses and /codex-api/custom-proxy/v1/responses
still call readFreeModeStateSync directly and thus bypass the in-memory fallback
provided by ensureDefaultFreeModeStateForMissingAuthSync; update those handlers
to call ensureDefaultFreeModeStateForMissingAuthSync(statePath) (or reuse its
resolved value) instead of readFreeModeStateSync so they receive the Zen
fallback when auth is missing, and mirror the same change in the custom-proxy
handler so both proxy routes behave consistently with
createDefaultOpenCodeZenFreeModeState().
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Duplicate comments:
In `@src/server/codexAppServerBridge.ts`:
- Around line 3352-3358: Route handlers for /codex-api/zen-proxy/v1/responses
and /codex-api/custom-proxy/v1/responses still call readFreeModeStateSync
directly and thus bypass the in-memory fallback provided by
ensureDefaultFreeModeStateForMissingAuthSync; update those handlers to call
ensureDefaultFreeModeStateForMissingAuthSync(statePath) (or reuse its resolved
value) instead of readFreeModeStateSync so they receive the Zen fallback when
auth is missing, and mirror the same change in the custom-proxy handler so both
proxy routes behave consistently with createDefaultOpenCodeZenFreeModeState().

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: d0a34474-ae20-424c-9dec-b0755d13c392

📥 Commits

Reviewing files that changed from the base of the PR and between aa10f66 and 8d63a4d.

📒 Files selected for processing (5)
  • src/composables/useDesktopState.test.ts
  • src/composables/useDesktopState.ts
  • src/server/codexAppServerBridge.ts
  • tests.md
  • whatToTest.md
✅ Files skipped from review due to trivial changes (2)
  • whatToTest.md
  • tests.md

Comment thread llm-wiki/wiki/log.md Outdated
Comment on lines +3 to +8
## [2026-05-13] ingest | OpenCode Zen Docker auth and provider models
- Added source: `raw/fixes/opencode-zen-docker-auth-provider-models.md`.
- Updated wiki page: `concepts/opencode-zen-big-pickle.md`.
- Documents: no-auth Zen runtime fallback, auth-mounted Docker switching back to Codex defaults, first-turn materialization as a transient live-state condition, provider-model-first loading, and the build-time Docker install workaround for runtime `pnpm dlx` OOM risk.
- Updated `overview.md`, `entities/codex-web-local.md`, and `index.md`.



Action required

1. Wiki log entry not appended 📘 Rule violation ◔ Observability

The new ingest entry in llm-wiki/wiki/log.md was inserted at the top of the file instead of being
appended, breaking the append-only audit trail requirement. This can undermine traceability
expectations for wiki ingest operations.
Agent Prompt
## Issue description
`llm-wiki/wiki/log.md` must be append-only, but the new ingest entry was prepended near the top of the file.

## Issue Context
Compliance requires that ingest operations append exactly one new entry to the end of `llm-wiki/wiki/log.md` without rewriting existing history.

## Fix Focus Areas
- llm-wiki/wiki/log.md[3-8]



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
src/server/freeMode.test.ts (1)

37-55: ⚡ Quick win

Add coverage for Zen/user-key and auth-disabled guard branches.

Current assertions only validate openrouter + customKey cases. Please also cover the Line 212 behavior (Zen with non-empty API key should not suppress) and the !hasUsableCodexAuth/!enabled guards so future edits don’t regress fallback selection.

Suggested test additions
   it('suppresses community fallback providers when Codex auth appears', () => {
@@
     expect(shouldSuppressCommunityFreeModeForCodexAuth({
       enabled: true,
       apiKey: 'user-key',
       model: FREE_MODE_DEFAULT_MODEL,
       customKey: true,
       provider: 'openrouter',
       wireApi: 'responses',
     }, true)).toBe(false)
+
+    expect(shouldSuppressCommunityFreeModeForCodexAuth({
+      enabled: true,
+      apiKey: 'zen-user-key',
+      model: OPENCODE_ZEN_DEFAULT_MODEL,
+      customKey: false,
+      provider: 'opencode-zen',
+      wireApi: 'responses',
+    }, true)).toBe(false)
+
+    expect(shouldSuppressCommunityFreeModeForCodexAuth({
+      enabled: false,
+      apiKey: null,
+      model: OPENCODE_ZEN_DEFAULT_MODEL,
+      provider: 'opencode-zen',
+      wireApi: 'responses',
+    }, true)).toBe(false)
+
+    expect(shouldSuppressCommunityFreeModeForCodexAuth({
+      enabled: true,
+      apiKey: 'community-key',
+      model: FREE_MODE_DEFAULT_MODEL,
+      customKey: false,
+      provider: 'openrouter',
+      wireApi: 'responses',
+    }, false)).toBe(false)
   })
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/server/freeMode.test.ts` around lines 37 - 55, Add tests to cover the
missing branches in shouldSuppressCommunityFreeModeForCodexAuth: assert that
when provider is 'zen' with a non-empty apiKey (e.g., apiKey: 'user-key',
customKey: true/false as appropriate) the function returns false (Line 212
behavior), add a case where hasUsableCodexAuth is false to ensure it returns
false, and add a case where enabled is false to ensure the guard prevents
suppression; use the same FREE_MODE_DEFAULT_MODEL in each test to mirror
existing cases and reference the function name
shouldSuppressCommunityFreeModeForCodexAuth and constant FREE_MODE_DEFAULT_MODEL
to locate where to add these assertions.
llm-wiki/wiki/log.md (1)

4-13: ⚡ Quick win

Use relative markdown links for newly added log references.

These new entries add file references as inline code paths, but they should be markdown links for in-wiki navigation consistency.

Proposed patch
- - Added source: `raw/fixes/copied-auth-provider-promotion.md`.
- - Updated wiki page: `concepts/opencode-zen-big-pickle.md`.
+ - Added source: [raw/fixes/copied-auth-provider-promotion.md](../raw/fixes/copied-auth-provider-promotion.md).
+ - Updated wiki page: [concepts/opencode-zen-big-pickle.md](./concepts/opencode-zen-big-pickle.md).
  - Documents: suppressing community fallback provider state after valid Codex auth appears, preserving user-configured providers, importing copied auth into Accounts, provider-scoped Codex model persistence, stale feedback-row cleanup, and packaged Docker validation.
- - Updated `index.md`.
+ - Updated [index.md](./index.md).

- - Added source: `raw/fixes/opencode-zen-docker-auth-provider-models.md`.
- - Updated wiki page: `concepts/opencode-zen-big-pickle.md`.
+ - Added source: [raw/fixes/opencode-zen-docker-auth-provider-models.md](../raw/fixes/opencode-zen-docker-auth-provider-models.md).
+ - Updated wiki page: [concepts/opencode-zen-big-pickle.md](./concepts/opencode-zen-big-pickle.md).
  - Documents: no-auth Zen runtime fallback, auth-mounted Docker switching back to Codex defaults, first-turn materialization as a transient live-state condition, provider-model-first loading, and the build-time Docker install workaround for runtime `pnpm dlx` OOM risk.
- - Updated `overview.md`, `entities/codex-web-local.md`, and `index.md`.
+ - Updated [overview.md](./overview.md), [entities/codex-web-local.md](./entities/codex-web-local.md), and [index.md](./index.md).

As per coding guidelines, "llm-wiki/wiki/**/*.md: Add wiki links using relative markdown links in llm-wiki/wiki/ pages".

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@llm-wiki/wiki/log.md` around lines 4 - 13, Replace the inline code-style file
references in llm-wiki/wiki/log.md with relative markdown links so they navigate
within the wiki; for each occurrence of
raw/fixes/copied-auth-provider-promotion.md,
concepts/opencode-zen-big-pickle.md,
raw/fixes/opencode-zen-docker-auth-provider-models.md, overview.md,
entities/codex-web-local.md, and index.md convert the backticked path into a
relative link like [filename](./relative/path) pointing into the llm-wiki/wiki/
tree, preserving the displayed text but using Markdown link syntax for
consistent in-wiki navigation.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/App.vue`:
- Around line 4042-4058: The import path using externalAuthImportAttempted +
refreshAccountsFromAuth can mutate accounts.value without guaranteeing a
refreshAll run after the import completes or before a provider-change-triggered
refresh starts; modify the logic around
externalAuthImportAttempted/refreshAccountsFromAuth/loadAccountsState so you
await the import/fallback (use the Promise from
refreshAccountsFromAuth().catch(...) or loadAccountsState when fallback runs),
detect whether the auth/accounts actually changed (compare previous accounts or
set a flag), and then call refreshAll(...) once after both the awaited
import/fallback and any provider selection change have settled (use
selectedProvider/previousProvider to detect provider change) to ensure
quota/models are refreshed only after account import finishes and to avoid
racing refreshes.

---

Nitpick comments:
In `@llm-wiki/wiki/log.md`:
- Around line 4-13: Replace the inline code-style file references in
llm-wiki/wiki/log.md with relative markdown links so they navigate within the
wiki; for each occurrence of raw/fixes/copied-auth-provider-promotion.md,
concepts/opencode-zen-big-pickle.md,
raw/fixes/opencode-zen-docker-auth-provider-models.md, overview.md,
entities/codex-web-local.md, and index.md convert the backticked path into a
relative link like [filename](./relative/path) pointing into the llm-wiki/wiki/
tree, preserving the displayed text but using Markdown link syntax for
consistent in-wiki navigation.

In `@src/server/freeMode.test.ts`:
- Around line 37-55: Add tests to cover the missing branches in
shouldSuppressCommunityFreeModeForCodexAuth: assert that when provider is 'zen'
with a non-empty apiKey (e.g., apiKey: 'user-key', customKey: true/false as
appropriate) the function returns false (Line 212 behavior), add a case where
hasUsableCodexAuth is false to ensure it returns false, and add a case where
enabled is false to ensure the guard prevents suppression; use the same
FREE_MODE_DEFAULT_MODEL in each test to mirror existing cases and reference the
function name shouldSuppressCommunityFreeModeForCodexAuth and constant
FREE_MODE_DEFAULT_MODEL to locate where to add these assertions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro Plus

Run ID: e59d000b-dea1-48ea-8290-652994064484

📥 Commits

Reviewing files that changed from the base of the PR and between 8d63a4d and 4d62f2c.

📒 Files selected for processing (13)
  • llm-wiki/raw/fixes/copied-auth-provider-promotion.md
  • llm-wiki/wiki/concepts/opencode-zen-big-pickle.md
  • llm-wiki/wiki/index.md
  • llm-wiki/wiki/log.md
  • src/App.vue
  • src/api/codexGateway.ts
  • src/composables/useDesktopState.test.ts
  • src/composables/useDesktopState.ts
  • src/server/codexAppServerBridge.archive.test.ts
  • src/server/codexAppServerBridge.ts
  • src/server/freeMode.test.ts
  • src/server/freeMode.ts
  • tests.md
🚧 Files skipped from review as they are similar to previous changes (4)
  • src/composables/useDesktopState.test.ts
  • tests.md
  • src/server/codexAppServerBridge.archive.test.ts
  • src/server/codexAppServerBridge.ts

Comment thread src/App.vue Outdated
@friuns2
Owner Author

friuns2 commented May 13, 2026

/review

@qodo-free-for-open-source-projects

qodo-free-for-open-source-projects Bot commented May 13, 2026

Code Review by Qodo

New Review Started

This review has been superseded by a new analysis

@friuns2
Owner Author

friuns2 commented May 13, 2026

/review

@qodo-free-for-open-source-projects

qodo-free-for-open-source-projects Bot commented May 13, 2026

Persistent review updated to latest commit d939a17

Comment thread src/api/normalizers/v2.ts
@friuns2
Owner Author

friuns2 commented May 13, 2026

/review

@qodo-free-for-open-source-projects

qodo-free-for-open-source-projects Bot commented May 13, 2026

Persistent review updated to latest commit b51f44e
