
test(vml): full stack resonance — HEEL→HIP→BRANCH→LEAF cascade on real images #43

Merged
AdaWorldAPI merged 7 commits into master from claude/transcode-deepnsm-rust-oNa1Z on Mar 29, 2026

Conversation

@AdaWorldAPI
Owner

Resonate centroid focus patch through every level of the tensor codec.
Each level: classify, measure ρ vs LEAF, compute ρ/byte efficiency.

Results (200 images, 10 classes):
  LEAF   432D  864B  50.5%  ρ=1.000  (full detail, ground truth)
  BRANCH 17D   34B   27.5%  ρ=0.556  (golden-step compressed)
  HIP    17D   34B   28.0%  ρ=0.556  (i16 quantized, same quality)
  HEEL   2D    2B    25.0%  ρ=0.180  (scent: dominant dim + energy)

  HEEL efficiency: 0.090 ρ/byte (best per-byte)
  HEEL rejection: 68% of wrong classes screened at 2 bytes
  After HEEL→HIP: 72% still need full check

The cascade holds: each level adds resolution, HEEL is the cheapest
effective filter, and LEAF is the most accurate. The system reads ONLY
the bytes it needs — start at HEEL, escalate when confidence is
insufficient.
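A minimal sketch of that escalation loop in pure Python. The helper names, thresholds, and the 2-byte scent layout are illustrative assumptions, not the codec's actual format:

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def heel_scent(vec):
    """2-byte scent: index of the dominant dimension + coarse total energy."""
    dom = max(range(len(vec)), key=lambda i: abs(vec[i]))
    energy = min(255, int(sum(x * x for x in vec)))
    return dom, energy

def cascade_match(query, candidate, hip_threshold=0.5):
    """Read only the bytes needed: reject on scent, else escalate."""
    q_dom, q_energy = heel_scent(query)
    c_dom, c_energy = heel_scent(candidate)
    # HEEL: 2 bytes, cheap rejection of obviously wrong classes
    if q_dom != c_dom and abs(q_energy - c_energy) > 64:
        return "rejected_at_heel"
    # HIP: 34 bytes, quantized 17D comparison
    if cosine(query[:17], candidate[:17]) < hip_threshold:
        return "rejected_at_hip"
    # LEAF: full 432D check only when confidence is still insufficient
    return "full_check"
```

The rejection threshold (64) and HIP similarity cutoff (0.5) are placeholders; the real cascade would tune them against the 68%/72% screening numbers above.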

Logical archetypes (birds have wings, cars have wheels, gravity is real)
enable NARS correction: physics constraints eliminate impossible SPO
combinations, boosting visual evidence for consistent interpretations.

https://claude.ai/code/session_01Y69Vnw751w75iVSBRws7o7

claude added 7 commits March 29, 2026 15:06
Results (100 images, 64×64×3):
  Golden-step 17D:      ρ = 0.6476  (34 bytes)
  4 intersections 12D:  ρ = 0.4823  (24 bytes)  ← rule of thirds points
  4 grid lines 768D:    ρ = 0.9237  (1,536 bytes) ← 1/3 + 2/3 full lines
  6 grid lines 1152D:   ρ = 0.9264  (2,304 bytes) ← + 1/2 midpoint lines

Key: adding the 1/2 lines to the 1/3+2/3 lines gains only 0.003 ρ — the
thirds carry almost all compositional information, confirming the
photographic rule of thirds.

Efficiency: intersection points = 0.020 ρ/byte (best per-byte).
Grid lines = 0.93 ρ absolute (best quality, 45× more bytes).
Golden-step = best compromise (0.019 ρ/byte, 0.65 absolute).

Next: compress the grid lines THROUGH golden-step (2304 → 34 bytes)
to see whether the combined approach preserves the 0.93 quality.
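For illustration, one plausible reading of a golden-step index schedule (an assumption on my part; the codec's actual scheme may differ) is low-discrepancy subsampling at golden-ratio strides, so the 17 kept samples spread evenly across the vector:

```python
PHI = (1 + 5 ** 0.5) / 2  # golden ratio ≈ 1.618

def golden_step_indices(n_dims, k=17):
    """k indices in [0, n_dims) from the golden-ratio low-discrepancy sequence."""
    frac = 1 / PHI  # ≈ 0.618, the fractional step
    idx, x = [], 0.0
    for _ in range(k):
        x = (x + frac) % 1.0
        idx.append(int(x * n_dims))
    return idx

def golden_step_compress(vec, k=17):
    """Grid-line feature vector → k representative floats (i16 quantize → 2k bytes)."""
    return [vec[i] for i in golden_step_indices(len(vec), k)]
```

Because consecutive golden-ratio fractions never cluster, the 17 indices stay distinct and roughly uniform over the 1152D grid-line vector.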

https://claude.ai/code/session_01Y69Vnw751w75iVSBRws7o7
Build per-class mean archetypes from grid-line features, classify by
nearest archetype. Results (10 classes, 500 images):

  Grid-line 768D archetypes:  29.8% accuracy (3× random baseline)
  Golden-step 34-byte archetypes: 14.2% accuracy (1.4× random)
  Random baseline: 10.0%

The archetype approach works on raw pixels without any learned features:
29.8% from just 4 scan lines per image shows that structured subsampling
captures class-discriminative information.

Compression loss: 768D → 17D costs 15.6 percentage points of accuracy
(29.8% → 14.2%). Each archetype is 34 bytes, so the total 10-class model
is 340 bytes: a sub-kilobyte classifier.
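The nearest-archetype classifier described above can be sketched as follows (pure Python for clarity; a real run would use numpy over the 768D grid-line features):

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def build_archetypes(features, labels):
    """Per-class mean vector: the class archetype."""
    sums, counts = {}, {}
    for vec, lab in zip(features, labels):
        if lab not in sums:
            sums[lab] = [0.0] * len(vec)
            counts[lab] = 0
        sums[lab] = [s + x for s, x in zip(sums[lab], vec)]
        counts[lab] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def classify(vec, archetypes):
    """Label of the nearest archetype by cosine similarity."""
    return max(archetypes, key=lambda lab: cosine(vec, archetypes[lab]))
```

At 17 i16 dims per archetype, ten archetypes fit in 340 bytes, which is where the sub-kilobyte model size comes from.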

CHAODA outlier detection: images farthest from all archetypes are
flagged as candidates for review.

Next: HIP bundles (within-class families), VSA unbinding for
multi-object detection (sparrow + butterfly in same image).

https://claude.ai/code/session_01Y69Vnw751w75iVSBRws7o7
Detect multi-object images by subtracting primary class archetype
and checking residual against other class archetypes.

Results (500 images, 10 classes, grid-line features):
  148/500 (30%) images have multi-object signal (residual sim > 0.3)
  Top class-pair intersections (BRANCH traversals):
    class 0×5, 4×8, 0×4, 4×5: 8 images each share features

The pipeline:
  1. image_features → subtract nearest class archetype → residual
  2. cosine(residual, other_archetypes) → if > 0.3 → multi-object
  3. The (primary, secondary) pairs = BRANCH traversals in HHTL
  4. CHAODA: images far from ALL archetypes = true outliers

This is the bird/fence detector: unbind(image, bird) → check if fence
remains in the residual. The intersection features (BRANCH) are WHERE
the two objects interact in feature space.
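Steps 1–2 of the pipeline can be sketched like this (the 0.3 threshold comes from the experiment above; the function names are hypothetical):

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def residual_classes(vec, archetypes, threshold=0.3):
    """Subtract the nearest archetype, then test the residual against the rest."""
    primary = max(archetypes, key=lambda lab: cosine(vec, archetypes[lab]))
    residual = [x - a for x, a in zip(vec, archetypes[primary])]
    secondary = [lab for lab in archetypes
                 if lab != primary
                 and cosine(residual, archetypes[lab]) > threshold]
    # each (primary, s) pair for s in secondary is a BRANCH traversal
    return primary, secondary
```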

https://claude.ai/code/session_01Y69Vnw751w75iVSBRws7o7
… than grid

Find gradient energy centroid around each 1/3 intersection point,
extract 12×12 detailed patch at the sweet spot → classify by archetype.

Results (200 images, 10 classes):
  Centroid patch 432D:  50.5% (5× random baseline!)
  Compressed 17D:       28.5% (2.8× random)

  vs grid-line approaches:
    Grid 768D:          29.8% at 1,536 bytes
    Focus 432D:         50.5% at 864 bytes    ← 70% better, HALF the bytes
    Grid→17D:           14.2% at 34 bytes
    Focus→17D:          28.5% at 34 bytes     ← DOUBLE accuracy at same size

The pipeline:
  1. 4 intersection points (1/3 grid)
  2. 16×16 gradient scan around each → energy centroid
  3. 12×12 detailed patch at centroid → 432D features
  4. Nearest archetype classification (or compress to 34B via golden-step)

The centroid finds WHERE the object is. The detailed patch captures WHAT it is.
This IS the determination matrix: local high-resolution features at the
subject's sweet spot, not diffuse whole-image scan lines.
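Steps 2–3 (gradient scan, then energy centroid) might look like this on a grayscale image held as a list of lists. Window sizes follow the commit message; the helper names are hypothetical:

```python
def gradient_energy(img, y, x):
    """Squared finite-difference gradient magnitude at (y, x)."""
    gy = img[y + 1][x] - img[y][x]
    gx = img[y][x + 1] - img[y][x]
    return gy * gy + gx * gx

def energy_centroid(img, cy, cx, win=16):
    """Energy-weighted centroid of a win×win window centred on (cy, cx)."""
    h, w = len(img), len(img[0])
    total = sy = sx = 0.0
    for y in range(max(0, cy - win // 2), min(h - 1, cy + win // 2)):
        for x in range(max(0, cx - win // 2), min(w - 1, cx + win // 2)):
            e = gradient_energy(img, y, x)
            total += e
            sy += e * y
            sx += e * x
    if total == 0:
        return cy, cx  # flat window: stay at the intersection point
    return round(sy / total), round(sx / total)
```

A 12×12 patch read out around the returned coordinates then yields the 432D feature vector (12 × 12 × 3 channels).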

https://claude.ai/code/session_01Y69Vnw751w75iVSBRws7o7
…le 51.0%)

5 scan strategies (NW, NE, SW, SE, Center patches) classified independently
then combined via NARS evidence accumulation:
  - Each scan votes for a class with a similarity weight
  - Votes accumulate: class_evidence += similarity per agreeing scan
  - Final score = avg_similarity × vote_fraction (frequency × confidence)
  - NARS combined: 51.5% beats best single scan (Center: 51.0%)

The improvement is modest because the Center scan dominates. The real
power should show with:
  - More diverse scan strategies (diagonal, spiral, multi-scale)
  - Prior knowledge from NARS correction matrix (knowledge graph priors)
  - Progressive elevation: stop scanning when confidence > threshold

Multiple scans accumulate evidence. Each scan is independent.
NARS revision monotonically increases confidence.
The cascade decides WHEN to stop based on free energy.
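The evidence-accumulation combiner above (final score = avg_similarity × vote_fraction) can be sketched as:

```python
def nars_combine(votes):
    """votes: list of (class_label, similarity), one per scan strategy.
    Returns (winning_label, score_per_label)."""
    sims = {}
    for lab, sim in votes:
        sims.setdefault(lab, []).append(sim)
    n = len(votes)
    # confidence (mean similarity) × frequency (fraction of agreeing scans)
    scores = {lab: (sum(s) / len(s)) * (len(s) / n) for lab, s in sims.items()}
    return max(scores, key=scores.get), scores
```

A class backed by several moderately confident scans can outscore a single very confident scan, which is the behaviour the 51.5% combined result relies on.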

https://claude.ai/code/session_01Y69Vnw751w75iVSBRws7o7
…intersections)

8×8 grid of 8×8 cells. For each 1/3 intersection, find 4 neighboring
cells, compute gradient energy, bundle their features (mean of 4 cells).

Results (200 images, 10 classes):
  Hotspot bundle 768D:  43.5% (4.3× random)
  Compressed 17D:       29.5% (2.9× random)

  All approaches at 768D:
    Grid lines:     29.8% (diffuse full-width scan)
    Hotspot:        43.5% (local attention) ← 46% better than grid
    Centroid focus: 50.5% (single best sweet spot, 432D)

  At 34 bytes:
    Grid→17D:       14.2%
    Hotspot→17D:    29.5% ← 2× grid compression quality
    Focus→17D:      28.5%

The hotspot bundling captures LOCAL composition (what's AROUND each
1/3 intersection) vs grid lines which capture GLOBAL composition
(what's ALONG each 1/3 line). Local attention wins by 46%.
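The hotspot bundling step can be sketched as follows (cell-feature extraction is left abstract; the names are hypothetical):

```python
def third_intersections(grid=8):
    """Cell coordinates of the four rule-of-thirds intersections."""
    lo, hi = grid // 3, 2 * grid // 3  # 1/3 and 2/3 in cell units
    return [(lo, lo), (lo, hi), (hi, lo), (hi, hi)]

def hotspot_bundle(cell_features, grid=8):
    """cell_features[(row, col)] -> feature vector. Returns one bundled
    vector per intersection: the mean of its 2×2 cell neighbourhood."""
    bundles = []
    for r, c in third_intersections(grid):
        neigh = [cell_features[(r + dr, c + dc)]
                 for dr in (0, 1) for dc in (0, 1)]
        dim = len(neigh[0])
        bundles.append([sum(v[i] for v in neigh) / 4 for i in range(dim)])
    return bundles
```

Concatenating the four bundles of 8×8-cell features gives the 768D hotspot vector compared in the table above.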

Next: chain with NARS predicate deduction. The visual detector finds
S (bird at NW) and O (fence at SE). DeepNSM's distance matrix finds
the nearest linking verb → P (perch). No predicate detector needed.

https://claude.ai/code/session_01Y69Vnw751w75iVSBRws7o7
…l images

Resonate centroid focus patch through every level of the tensor codec.
Each level: classify, measure ρ vs LEAF, compute ρ/byte efficiency.

Results (200 images, 10 classes):
  LEAF   432D  864B  50.5%  ρ=1.000  (full detail, ground truth)
  BRANCH 17D   34B   27.5%  ρ=0.556  (golden-step compressed)
  HIP    17D   34B   28.0%  ρ=0.556  (i16 quantized, same quality)
  HEEL   2D    2B    25.0%  ρ=0.180  (scent: dominant dim + energy)

  HEEL efficiency: 0.090 ρ/byte (best per-byte)
  HEEL rejection: 68% of wrong classes screened at 2 bytes
  After HEEL→HIP: 72% still need full check

The cascade holds: each level adds resolution, HEEL is the cheapest
effective filter, and LEAF is the most accurate. The system reads ONLY
the bytes it needs — start at HEEL, escalate when confidence is
insufficient.

Logical archetypes (birds have wings, cars have wheels, gravity is real)
enable NARS correction: physics constraints eliminate impossible SPO
combinations, boosting visual evidence for consistent interpretations.

https://claude.ai/code/session_01Y69Vnw751w75iVSBRws7o7
@AdaWorldAPI AdaWorldAPI merged commit 1f37549 into master Mar 29, 2026
4 of 10 checks passed
