
Fixes from problematic pr worth keeping #415

Closed

OisinKyne wants to merge 4 commits into main from oisin/413-1

Conversation

@OisinKyne (Contributor) commented May 4, 2026

Problem to be solved

PR #413 is satisfactory and slightly cleaner than what we did. Compared side-by-side:

Equivalent or better:

- Bootstrap rip-out, same architecture (image's /opt/hermes/.venv)
- Bumps image v2026.4.23 → v2026.4.30 (we kept the older pin)
- Fail-fast init-container check ("missing binary" / "missing extras"); slightly more defensive than what we had
- agent wallet backup/restore usage strings made runtime-neutral
- applyWalletMetadataConfigMap call added to Hermes restore; we missed this, and restore would have left a stale wallet-metadata ConfigMap in the cluster after a wallet swap
- Adds round-trip backup/restore tests for the Hermes path; we didn't write these

Not in #413 (consistent with the PR body's note that they punted on these intentionally or scope-wise):

1. Helmfile --server-side=true --force-conflicts on helmDefaults for both helmfiles. #413 ("fix: simplify Hermes recreation lifecycle") says "main has the managed-fields mitigation", but we now know the releaseLiteLLMConfigOwnership mitigation doesn't actually work (you hit before-first-apply on it). So the SSA conflict on litellm-config and remote-signer-keystore-password is unaddressed by main + #413 together.
2. ensureDevRegistries hoisted into both branches of K3dBackend.Up (so stopped registry caches come back on every up, not just on cluster create)
3. docker rm -f recovery in ensureDevRegistry when start fails because the container's referenced Docker network was reaped

Recommendation: merge #413 as-is, then branch fresh from main with those three leftovers. They're tightly scoped, mutually independent, and not adjacent to the bootstrap-rip-out diff.
Ordering inside the new branch:

| Order | Fix | Files |
|-------|-----|-------|
| 1 | helm SSA force-conflicts | internal/embed/infrastructure/helmfile.yaml, internal/hermes/hermes.go:generateHelmfile |
| 2 | Registry restart on every up | internal/stack/backend_k3d.go:Up |
| 3 | Registry orphaned-network recovery | internal/stack/dev_registry.go:ensureDevRegistry |

I'd also drop the releaseLiteLLMConfigOwnership from main in
that same fresh branch — it's load-bearing today (per the PR
body), but with --force-conflicts it becomes dead code, same
reasoning as we worked through earlier today.

One nit on #413 worth a one-line review comment, not blocking: the init-hermes-data container in #413 runs as the pod default user (uid 10000), which is fine for the mkdir + state.db path, but if the PVC has any pre-existing root-owned content, uid 10000 won't be able to modify it. That won't bite anyone in practice but is worth flagging.

Summary

All five tasks complete, build clean, tests passing.

Summary of changes on oisin/413-1:

File: internal/embed/infrastructure/helmfile.yaml
Change: New helmDefaults.args: ["--server-side=true",
"--force-conflicts"] block at the top
────────────────────────────────────────
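Concretely, the block described above would look something like this (the flag values come straight from the change description; exact placement within helmfile.yaml is an assumption):

```yaml
# Make every helmfile sync use Kubernetes server-side apply, and take
# ownership of conflicting fields instead of failing on SSA conflicts.
helmDefaults:
  args:
    - "--server-side=true"
    - "--force-conflicts"
```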
File: internal/hermes/hermes.go (generateHelmfile)
Change: Same helmDefaults block in the generated per-instance
Hermes helmfile
────────────────────────────────────────
File: internal/stack/backend_k3d.go (Up)
Change: Hoisted ensureDevRegistries out of the else-only branch so it runs on every Up; stopped registry containers come back even when the cluster is just being restarted
────────────────────────────────────────
File: internal/stack/dev_registry.go (ensureDevRegistry)
Change: When docker start fails on an existing stopped container (typical cause: dangling network reference after reclaimLeakedDevK3dNetworks), force-rm the dead shell and fall through to recreate. Cache content lives on the host bind-mount, so layers survive
────────────────────────────────────────
File: internal/stack/stack.go
Change: Dropped the now-redundant releaseLiteLLMConfigOwnership function and its call site in syncDefaults; --force-conflicts is the load-bearing fix
Net effect: obol stack down/up cycles should be idempotent on
shared SSA fields, and dev-mode retries reuse cached image
layers instead of re-pulling from upstream every time.

@OisinKyne requested a review from bussyjd May 4, 2026 13:48
@bussyjd mentioned this pull request May 4, 2026
@OisinKyne (Contributor, Author)

favouring #417 and upstreams

@OisinKyne closed this May 4, 2026
