
feat(quantized): F16 type + ScalarOperand impl for burn NdArrayElement (sprint A2 partial) #125

Merged
AdaWorldAPI merged 1 commit into master from claude/burn-A2-half-simd-v2
Apr 30, 2026

Conversation

@AdaWorldAPI
Owner

Summary

Sprint A2 (partial) — F16 type + conversions + ScalarOperand impls. Salvaged from the agent's stash after it timed out before pushing.

Closes item (1) of the parity list (ScalarOperand/LinalgScalar on half types). Item (2) (BF16x16/F16x16 SIMD vectors) deferred to Wave 3 — the simd_half module was started but not completed before timeout.

What ships (+471/-2 LOC in src/hpc/quantized.rs)

  • F16(pub u16) — IEEE 754 binary16 with from_f32 (round-to-nearest-even), from_f32_truncate, to_f32
  • Constants: ZERO, ONE, NEG_ONE, INFINITY, NEG_INFINITY, NAN
  • Slice converters: f32_to_f16_slice, f16_to_f32_slice, f32_vec_to_f16, f16_vec_to_f32
  • Add/Sub/Mul/Div/Neg operator impls (via to_f32→op→from_f32)
  • ScalarOperand impl — unblocks burn's NdArrayElement for F16
  • 11 new tests
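For readers who want a feel for what "round-to-nearest-even" and the to_f32→op→from_f32 operator pattern involve, here is a minimal, self-contained sketch. This is an illustrative reconstruction, not the code that shipped in src/hpc/quantized.rs; only the type and method names are taken from the PR description, the internals are assumptions based on standard IEEE 754 binary16 bit manipulation:

```rust
// Sketch only: names follow the PR description (F16, from_f32, to_f32),
// but the internals are an illustrative reconstruction, not the shipped code.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct F16(pub u16);

impl F16 {
    pub const ZERO: F16 = F16(0x0000);
    pub const ONE: F16 = F16(0x3C00);

    /// f32 -> f16 with round-to-nearest-even.
    pub fn from_f32(x: f32) -> Self {
        let bits = x.to_bits();
        let sign = ((bits >> 16) & 0x8000) as u16;
        let exp = ((bits >> 23) & 0xFF) as i32;
        let mant = bits & 0x007F_FFFF;

        if exp == 0xFF {
            // Infinity stays infinity; NaN keeps a nonzero payload.
            let payload = if mant != 0 { 0x0200 } else { 0 };
            return F16(sign | 0x7C00 | payload);
        }
        let unbiased = exp - 127; // f32 bias 127 vs f16 bias 15
        if unbiased > 15 {
            return F16(sign | 0x7C00); // overflow -> infinity
        }
        if unbiased >= -14 {
            // Normal f16: keep the top 10 mantissa bits, then round to
            // nearest even. Bit 12 is the round bit; mask 0x2FFF covers the
            // sticky bits (0..11) plus the kept LSB (bit 13) for ties.
            let mut out =
                sign | (((unbiased + 15) as u16) << 10) | (mant >> 13) as u16;
            if bits & 0x1000 != 0 && bits & 0x2FFF != 0 {
                out += 1; // a carry into the exponent field is still correct
            }
            return F16(out);
        }
        if unbiased < -25 {
            return F16(sign); // too small even for subnormals: signed zero
        }
        // Subnormal f16: shift the 24-bit significand into place, rounding.
        let full = mant | 0x0080_0000;
        let shift = (-unbiased - 1) as u32; // 14..=24
        let mut kept = (full >> shift) as u16;
        let round = (full >> (shift - 1)) & 1 != 0;
        let sticky = full & ((1u32 << (shift - 1)) - 1) != 0;
        if round && (sticky || kept & 1 == 1) {
            kept += 1; // may promote to the smallest normal; also correct
        }
        F16(sign | kept)
    }

    /// f16 -> f32 (exact: every f16 value is representable as an f32).
    pub fn to_f32(self) -> f32 {
        let sign = ((self.0 & 0x8000) as u32) << 16;
        let exp = ((self.0 >> 10) & 0x1F) as u32;
        let mant = (self.0 & 0x03FF) as u32;
        let bits = match (exp, mant) {
            (0, 0) => sign, // signed zero
            (0, _) => {
                // Subnormal: renormalize into f32's wider exponent range.
                let mut shifts = 0u32;
                let mut m = mant;
                while m & 0x0400 == 0 {
                    m <<= 1;
                    shifts += 1;
                }
                sign | ((113 - shifts) << 23) | ((m & 0x03FF) << 13)
            }
            (31, 0) => sign | 0x7F80_0000,                  // infinity
            (31, _) => sign | 0x7F80_0000 | (mant << 13),   // NaN
            _ => sign | ((exp + 112) << 23) | (mant << 13), // rebias: 127 - 15
        };
        f32::from_bits(bits)
    }
}

// The operator pattern named in the PR: widen to f32, operate, narrow back.
impl std::ops::Add for F16 {
    type Output = F16;
    fn add(self, rhs: F16) -> F16 {
        F16::from_f32(self.to_f32() + rhs.to_f32())
    }
}

fn main() {
    assert_eq!(F16::from_f32(1.0).0, 0x3C00);
    assert_eq!((F16::ONE + F16::ONE).to_f32(), 2.0);
    // f16 spacing near 2048 is 2; ties round to the even mantissa.
    assert_eq!(F16::from_f32(2049.0).to_f32(), 2048.0);
    assert_eq!(F16::from_f32(2051.0).to_f32(), 2052.0);
    assert!(F16::from_f32(f32::NAN).to_f32().is_nan());
}
```

Routing every operator through f32 trades some speed for correctness: it guarantees a single, well-tested rounding step per operation instead of reimplementing half-precision arithmetic bit by bit.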

What's deferred (Wave 3)

  • BF16x16 / F16x16 SIMD vector types (simd_half module — WIP, not yet complete)
  • Slice-level add_bf16_inplace / cast_f16_to_f32_batch

Verification

  • cargo build: clean
  • cargo test --lib: 1781 passed, 0 failed (sprint total: +81 tests over master baseline)

https://claude.ai/code/session_01NYGrxVopyszZYgLBxe4hgj


Generated by Claude Code

…perand impls (sprint A2 partial)

Items (1)+(2) partial — F16 type + round-to-nearest-even conversion shipped.
BF16x16/F16x16 SIMD vector types deferred (simd_half module WIP, not yet complete).

What ships:
- F16 struct with from_f32 (round-to-nearest-even), from_f32_truncate, to_f32
- F16 constants: ZERO, ONE, NEG_ONE, INFINITY, NEG_INFINITY, NAN
- Slice converters: f32_to_f16_slice, f16_to_f32_slice, f32_vec_to_f16, f16_vec_to_f32
- Add/Sub/Mul/Div/Neg operator impls (via to_f32→op→from_f32)
- ScalarOperand impl (unblocks burn's NdArrayElement for F16)
- 11 new tests (round-trip, boundary values, NaN preservation, arithmetic)

What's deferred (Wave 3):
- BF16x16 / F16x16 SIMD vector types (simd_half module)
- Slice-level add_bf16_inplace / cast_f16_to_f32_batch

https://claude.ai/code/session_01NYGrxVopyszZYgLBxe4hgj
@AdaWorldAPI AdaWorldAPI merged commit 8eb532d into master Apr 30, 2026
4 of 10 checks passed

@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 53c950e7ef


Comment thread src/lib.rs
Comment on lines +262 to +263
#[allow(clippy::all, missing_docs, dead_code, unused_variables, unused_imports)]
// pub mod simd_half; // TODO: BF16x16/F16x16 SIMD vectors (A2 WIP)

P2: Remove accidental allow on backend module

Because pub mod simd_half is commented out, the outer attribute #[allow(clippy::all, missing_docs, ...)] now applies to the next real item (backend) when feature = "std" is enabled. This unintentionally suppresses linting and documentation checks for the entire backend module, which can hide real issues and makes maintenance harder.
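The fall-through Codex describes is general Rust attribute semantics: an outer attribute skips comments and attaches to the next item. A minimal, self-contained illustration (the `backend` module body and function names here are made up for the demo, not taken from the repo):

```rust
// An outer attribute attaches to the next *item*; comments are not items,
// so commenting out `pub mod simd_half;` leaves the attribute pointing at
// whatever item follows it.
#[allow(clippy::all, missing_docs, dead_code, unused_variables, unused_imports)]
// pub mod simd_half; // TODO: BF16x16/F16x16 SIMD vectors (A2 WIP)
pub mod backend {
    // Normally `never_called` would trigger a dead_code warning, but the
    // stray allow above silences it for this whole module.
    fn never_called() {}

    pub fn status() -> &'static str {
        "lints suppressed here"
    }
}

fn main() {
    // The fix is to comment out the attribute together with the module it
    // was written for, so `backend` is linted normally again.
    assert_eq!(backend::status(), "lints suppressed here");
}
```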


