Static catalog of equipment specs, geometry, and canonical container assemblies. Source of truth for ARCNODE hardware. CI builds `manifest.yaml` and pushes to S3 on every merge to main; `edp-api` reads from there at job time.

Architecture decisions live in `system_adrs.md`. Spec format defined by `equipment_spec_schema.md`.
```
edp-module-assemblies/
├── readme.md
├── manifest.yaml
├── pyproject.toml
├── equipment_spec_schema.md    # shared schema for all spec.yaml files
├── system_adrs.md              # collection of architecture decisions
├── equipment/
│   ├── CMP-NODE-001/
│   │   ├── spec.yaml
│   │   ├── datasheet.pdf
│   │   ├── envelope.step       # built by build_envelope.py
│   │   └── vendor.step         # optional, vendor-supplied
│   ├── GRD-XFM-001/
│   │   └── ...
│   └── EXT-BESS-001/
│       └── ...
├── assemblies/
│   ├── compute_container/
│   │   └── commercial/
│   │       ├── bom.yaml
│   │       ├── assembly.step
│   │       └── assembly.glb
│   └── grid_container/
│       └── commercial_ac/
│           └── ...
├── src/
│   ├── assemblies/
│   │   ├── compute_container.py
│   │   └── grid_container.py
│   ├── config.py
│   ├── envelopes/
│   │   └── build_envelope.py
│   ├── __init__.py
│   └── main.py
├── scripts/
│   ├── build_manifest.py
│   ├── push_to_s3.py
│   └── validate_specs.py
└── indices/
    └── by-category.json
```
- **What:** Single YAML at repo root listing S3 URLs for every consumable artifact.
- **Used by:** `edp-api` fetches it once per job and looks up specs/BOMs/assemblies from there. No directory walking.
- **Generated by:** `scripts/build_manifest.py` in CI: walks `equipment/` and `assemblies/`, builds the URL map, validates against a Pydantic schema.
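A minimal sketch of that walk (the real `scripts/build_manifest.py` is not reproduced here; `ARTIFACT_BASE` and the output shape are assumptions based on the schema below):

```python
from pathlib import Path

# Hypothetical base URL; the real script presumably derives this from CI config.
ARTIFACT_BASE = "https://arcnode-artifacts.s3.amazonaws.com"

def build_manifest(repo_root: Path) -> dict:
    """Walk equipment/ and assemblies/ and build the id/variant -> URL map."""
    specs = {}
    for spec in sorted((repo_root / "equipment").glob("*/spec.yaml")):
        equipment_id = spec.parent.name
        specs[equipment_id] = f"{ARTIFACT_BASE}/equipment/{equipment_id}/spec.yaml"

    assemblies = {}
    for container in sorted((repo_root / "assemblies").iterdir()):
        variants = {}
        for variant in sorted(container.iterdir()):
            prefix = f"{ARTIFACT_BASE}/assemblies/{container.name}/{variant.name}"
            variants[variant.name] = {
                "bom": f"{prefix}/bom.yaml",
                "step": f"{prefix}/assembly.step",
                "glb": f"{prefix}/assembly.glb",
            }
        assemblies[container.name] = variants

    return {"version": "0.0.0", "specs": specs, "assemblies": assemblies}
```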
```python
from pydantic import BaseModel, HttpUrl
from typing import Literal

class GeometryUrls(BaseModel):
    envelope: HttpUrl
    vendor_step: HttpUrl | None = None

class AssemblyVariant(BaseModel):
    bom: HttpUrl
    step: HttpUrl
    glb: HttpUrl

class PlateUrls(BaseModel):
    step: HttpUrl
    dxf: HttpUrl
    pdf: HttpUrl

class Manifest(BaseModel):
    version: str  # semver, e.g. "1.2.0"
    specs: dict[str, HttpUrl]          # equipment_id -> spec.yaml URL
    geometry: dict[str, GeometryUrls]  # equipment_id -> envelope + optional vendor STEP
    assemblies: dict[
        Literal["compute_container", "grid_container"],
        dict[str, AssemblyVariant],    # variant name -> AssemblyVariant
    ]
    plates: dict[str, PlateUrls]       # plate_id -> plate URLs
```

CI pushes the following on every merge to main:
```
s3://arcnode-artifacts/
├── manifest.yaml               # edp-api entry point
├── equipment/
│   └── {equipment_id}/
│       └── spec.yaml           # consumed by edp-api
└── assemblies/
    ├── compute_container/
    │   └── {variant}/
    │       ├── bom.yaml        # consumed by edp-api
    │       ├── assembly.step   # passthrough URL
    │       └── assembly.glb    # passthrough URL
    └── grid_container/
        └── {variant}/
            └── ...
```
Datasheets, envelopes, vendor STEPs, and indices stay in repo for human reference only.
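The consumable key layout above can be expressed as a couple of pure helpers (hypothetical names; the real scripts may structure this differently):

```python
BUCKET = "s3://arcnode-artifacts"

def spec_key(equipment_id: str) -> str:
    """S3 URL of an equipment spec, per the layout above."""
    return f"{BUCKET}/equipment/{equipment_id}/spec.yaml"

def assembly_keys(container: str, variant: str) -> dict[str, str]:
    """S3 URLs of a container variant's consumable artifacts."""
    prefix = f"{BUCKET}/assemblies/{container}/{variant}"
    return {
        "bom": f"{prefix}/bom.yaml",
        "step": f"{prefix}/assembly.step",
        "glb": f"{prefix}/assembly.glb",
    }
```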
Author-time (local):
1. Add `equipment/{ID}/spec.yaml` + `datasheet.pdf`
2. `python src/envelopes/build_envelope.py` → writes `equipment/{ID}/envelope.step`
3. Add part to `src/assemblies/{container}.py`
4. `python src/assemblies/{container}.py --variant {v}` → writes `assemblies/{container}/{variant}/assembly.step + .glb + bom.yaml`
5. Commit and push
CI (on merge to main):
1. `scripts/validate_specs.py`: schema check on every `spec.yaml`
2. `scripts/build_manifest.py`: generate `manifest.yaml` + `indices/`
3. `scripts/push_to_s3.py`: push `manifest.yaml`, `equipment/*/spec.yaml`, `assemblies/` to S3
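The shape of the schema check can be sketched as a function that returns a list of problems (illustrative only; the required fields here are assumptions, and `equipment_spec_schema.md` is authoritative):

```python
def validate_spec(spec: dict) -> list[str]:
    """Return a list of problems found in one parsed spec.yaml; empty means it passes.

    The required keys below are illustrative, not the real schema.
    """
    problems = []
    for key in ("id", "category", "dimensions_mm"):  # assumed required fields
        if key not in spec:
            problems.append(f"missing required field: {key}")
    if "id" in spec and not isinstance(spec["id"], str):
        problems.append("id must be a string")
    return problems
```

CI would run this over every `equipment/*/spec.yaml` and fail the build on any non-empty result.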
Custom-fab interface plates live in edp-interface-plates. Pinned by version in each assembly's bom.yaml:
```yaml
plates:
  - id: CG
    version: v1.2
    s3: s3://arcnode-artifacts/plates/CG/v1.2/plate.step
```

CadQuery imports the plate STEP at assembly build time. Version pinning ensures reproducibility.
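A pinned plate entry resolves to its artifact URL mechanically; a small helper makes the convention explicit (hypothetical function, the assembly scripts may do this inline):

```python
def plate_step_url(plate_id: str, version: str,
                   bucket: str = "s3://arcnode-artifacts") -> str:
    """S3 URL of a pinned interface-plate STEP, matching the bom.yaml pin format."""
    return f"{bucket}/plates/{plate_id}/{version}/plate.step"
```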
`edp-api` consumes from S3:

- `manifest.yaml`: entry point, fetched once per job
- `equipment/{ID}/spec.yaml`: read by the BOM, drawing, and DTM generators
- `assemblies/{type}/{variant}/bom.yaml`: multiplied by container count
- `assemblies/{type}/{variant}/assembly.step + .glb`: passthrough URLs

The library client caches in-memory per job (~15 unique parts per deployment).
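The per-job caching behavior amounts to fetch-at-most-once memoization; a minimal sketch (the actual library client is not shown here):

```python
from typing import Callable

class ManifestCache:
    """Per-job in-memory cache: each artifact URL is fetched at most once."""

    def __init__(self, fetch: Callable[[str], bytes]):
        self._fetch = fetch            # injected fetcher, e.g. an S3/HTTP GET
        self._cache: dict[str, bytes] = {}

    def get(self, url: str) -> bytes:
        if url not in self._cache:     # miss: fetch and remember
            self._cache[url] = self._fetch(url)
        return self._cache[url]        # hit: no network round trip
```

With ~15 unique parts per deployment the whole working set fits comfortably in memory for the lifetime of a job.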
All equipment/{ID}/spec.yaml files conform to equipment_spec_schema_v0.7.md. Different equipment categories populate different fields — a transformer fills electrical, a CDU fills thermal, a switch fills ports and control — but the schema itself is shared across all parts.
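As an illustration of the category-dependent fields, a transformer spec might look roughly like this (all field names here are assumptions for illustration; `equipment_spec_schema.md` is authoritative):

```yaml
# Illustrative only -- not taken from the real schema.
id: GRD-XFM-001
category: transformer
dimensions_mm: {w: 2200, d: 1600, h: 2400}
electrical:            # populated for transformers
  rating_kva: 2500
  primary_v: 13800
  secondary_v: 480
# a CDU would instead populate a thermal block; a switch, ports and control
```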