Update to use the new HeaderCache type instead of implementing the Cache trait, pass BestBlock instead of BlockHash to synchronize_listeners, and pass HeaderCache by value to SpvClient::new. Also adapt to BestBlock gaining a previous_blocks field and ChannelManager deserialization returning BestBlock instead of BlockHash. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
UniFFI cannot represent the fixed-size array that upstream's BestBlock carries via `previous_blocks`, so NodeStatus.current_best_block was unusable from Swift, Kotlin, and Python once upstream added that field. Introduce a small ldk-node BestBlock with just hash and height — the pieces bindings can handle — and expose it in place of the upstream type on the public API. Generated with assistance from Claude Code. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The `max_inbound_htlc_value_in_flight_percent_of_channel` config setting was used when acting as an LSPS2 service in order to forward the initial payment. However, upstream split the setting into two, one for announced and one for unannounced channels, with the latter defaulting to 100%.
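The commit above can be illustrated with a minimal sketch. All type and field names here are hypothetical stand-ins, not the real ldk-node/LDK API; the point is only the selection logic: once the single knob becomes two, the LSPS2 service must pick the value matching the channel's announcement status, and unannounced (JIT) channels get the 100% default.

```rust
// Hypothetical config shape after the upstream split; field names are
// illustrative, not LDK's actual ChannelHandshakeConfig fields.
struct ChannelConfig {
    // Percent of channel value allowed in-flight on announced channels.
    max_inbound_htlc_pct_announced: u8,
    // Percent for unannounced channels; upstream defaults this to 100.
    max_inbound_htlc_pct_unannounced: u8,
}

// Compute the in-flight cap the service should apply when forwarding the
// initial payment over a channel of the given value.
fn max_inbound_htlc_value_msat(
    cfg: &ChannelConfig,
    channel_value_msat: u64,
    announced: bool,
) -> u64 {
    let pct = if announced {
        cfg.max_inbound_htlc_pct_announced
    } else {
        cfg.max_inbound_htlc_pct_unannounced
    } as u64;
    channel_value_msat / 100 * pct
}

fn main() {
    let cfg = ChannelConfig {
        max_inbound_htlc_pct_announced: 10,
        max_inbound_htlc_pct_unannounced: 100,
    };
    // LSPS2 JIT channels are unannounced, so the 100% default applies.
    println!("{}", max_inbound_htlc_value_msat(&cfg, 1_000_000_000, false));
}
```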
When a splice is already pending, the user needs a way to replace its funding transaction at a higher feerate. This adds rbf_channel() to handle that case and guards splice_in/splice_out against being called while a pending splice exists, directing users to rbf_channel instead. Also fixes signing for RBF replacements, which requires accessing outputs spent by unconfirmed transactions. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
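The guard described above can be sketched as follows. The types, method signatures, and error variants are simplified placeholders, not ldk-node's real API: once a splice is in flight, `splice_in` (and, analogously, `splice_out`) is rejected with an error pointing at `rbf_channel`, which in turn only accepts a strictly higher feerate for the replacement.

```rust
#[derive(Debug, PartialEq)]
enum SpliceError {
    SplicePending,   // a splice is in flight; caller should use rbf_channel
    NoPendingSplice, // rbf_channel called with nothing to replace
    FeerateTooLow,   // RBF replacement must pay a higher feerate
}

struct Channel {
    // Feerate of the pending splice's funding tx, if one is in flight.
    pending_splice_feerate: Option<u32>,
}

impl Channel {
    fn splice_in(&mut self, feerate: u32) -> Result<(), SpliceError> {
        if self.pending_splice_feerate.is_some() {
            // A splice is already pending; only RBF may replace it.
            return Err(SpliceError::SplicePending);
        }
        self.pending_splice_feerate = Some(feerate);
        Ok(())
    }

    fn rbf_channel(&mut self, new_feerate: u32) -> Result<(), SpliceError> {
        match self.pending_splice_feerate {
            None => Err(SpliceError::NoPendingSplice),
            Some(old) if new_feerate <= old => Err(SpliceError::FeerateTooLow),
            Some(_) => {
                self.pending_splice_feerate = Some(new_feerate);
                Ok(())
            }
        }
    }
}

fn main() {
    let mut ch = Channel { pending_splice_feerate: None };
    ch.splice_in(1000).unwrap();
    assert_eq!(ch.splice_in(2000), Err(SpliceError::SplicePending));
    ch.rbf_channel(2000).unwrap();
    println!("pending feerate: {:?}", ch.pending_splice_feerate);
}
```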
Channel-opening and splice transactions transition to Succeeded when ChannelReady fires, not after ANTI_REORG_DELAY confirmations. This matches the point at which the Lightning layer considers the channel usable: a zero-conf channel graduates as soon as its counterparty signals, and a high-conf channel waits however many confirmations the peer requires, rather than always stopping at six. For splice RBF, the payment records whichever candidate actually confirmed, with that candidate's amount and this node's share of the fee — not the fee-estimate used for weight at coin-selection time, and not the whole-tx fee for a multi-contributor splice. A channel closure whose funding or splice never confirmed discards its payment record instead of leaving it pending forever. Generated with assistance from Claude Code. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
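The transition rule above can be reduced to a small state sketch. The event and status enums are simplified placeholders (the real code reacts to LDK's `Event::ChannelReady`): a pending opening or splice payment graduates on `ChannelReady`, and confirmation depth alone no longer flips the status, which covers both the zero-conf and high-conf-peer cases with one rule.

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum PaymentStatus {
    Pending,
    Succeeded,
}

// Simplified stand-ins for the events the node reacts to.
enum Event {
    ChannelReady,
    TxConfirmed { depth: u32 },
}

fn next_status(current: PaymentStatus, event: &Event) -> PaymentStatus {
    match (current, event) {
        // The Lightning layer considers the channel usable here, whatever
        // confirmation depth (including zero) the counterparty demanded.
        (PaymentStatus::Pending, Event::ChannelReady) => PaymentStatus::Succeeded,
        // Confirmations alone no longer move the payment to Succeeded.
        (s, _) => s,
    }
}

fn main() {
    let mut s = PaymentStatus::Pending;
    // Six confirmations used to be the trigger; now they change nothing.
    s = next_status(s, &Event::TxConfirmed { depth: 6 });
    assert_eq!(s, PaymentStatus::Pending);
    s = next_status(s, &Event::ChannelReady);
    println!("{s:?}");
}
```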
Every on-chain payment update had to write both stores, and keeping them consistent relied on each call site doing the right thing. Nothing structural prevented a site from forgetting one of the writes or updating them inconsistently. No behavior change. Generated with assistance from Claude Code. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
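The structural fix described above amounts to a single choke point for writes. In this sketch (store and record types are simplified placeholders, not ldk-node's), every on-chain payment update goes through one `upsert` that writes both stores, so consistency holds by construction rather than by call-site discipline.

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct PaymentRecord {
    amount_sat: u64,
    status: &'static str,
}

struct PaymentStores {
    in_memory: HashMap<String, PaymentRecord>, // fast in-process lookup
    persisted: HashMap<String, PaymentRecord>, // stand-in for the KV store
}

impl PaymentStores {
    // The single write path: both stores are updated here, so no call
    // site can forget one of the writes or update them inconsistently.
    fn upsert(&mut self, id: &str, record: PaymentRecord) {
        self.persisted.insert(id.to_string(), record.clone());
        self.in_memory.insert(id.to_string(), record);
    }
}

fn main() {
    let mut stores = PaymentStores {
        in_memory: HashMap::new(),
        persisted: HashMap::new(),
    };
    stores.upsert("txid0", PaymentRecord { amount_sat: 50_000, status: "pending" });
    assert_eq!(stores.in_memory.get("txid0"), stores.persisted.get("txid0"));
    println!("stores agree for txid0");
}
```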
LDK calls broadcast_transactions once per signed funding or splice tx today. Record that assumption and note what would need to change if upstream starts rebroadcasting unconfirmed funding or splice transactions. No behavior change. Generated with assistance from Claude Code. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Previously the BroadcasterInterface implementation wrote the payment record synchronously when LDK invoked it. With a remote KV store this could block LDK's message handling for hundreds of milliseconds per call, noticeably during force-close bursts or splice broadcasts. Persistence now happens asynchronously and must complete before the transaction is sent to the chain client. If persistence fails, the broadcast is dropped: a payment record must exist for every on-chain tx we emit, otherwise a crash could leave the tx confirmed with no matching record. Generated with assistance from Claude Code. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
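The ordering invariant above can be sketched as a two-step pipeline. The persister and broadcaster here are illustrative closures, not LDK's `BroadcasterInterface`: the payment record is committed first, and the transaction reaches the chain client only if that write succeeded; on persistence failure the broadcast is dropped, so no confirmed transaction can exist without a matching record.

```rust
// Persist the payment record for `txid`, then broadcast only on success.
// Returns true if the tx was handed to the chain client. In ldk-node the
// persistence step runs asynchronously, off LDK's message-handling thread.
fn persist_then_broadcast(
    persist: impl Fn(&str) -> Result<(), ()>,
    broadcast: impl Fn(&str),
    txid: &str,
) -> bool {
    // Step 1: commit the payment record before anything hits the chain.
    if persist(txid).is_err() {
        // Persistence failed: drop the broadcast entirely, since a crash
        // could otherwise leave a confirmed tx with no matching record.
        return false;
    }
    // Step 2: only now send the transaction to the chain client.
    broadcast(txid);
    true
}

fn main() {
    let sent = persist_then_broadcast(|_| Ok(()), |tx| println!("broadcast {tx}"), "tx0");
    assert!(sent);
    // The broadcaster must never run when persistence fails.
    let dropped = persist_then_broadcast(|_| Err(()), |_| unreachable!(), "tx1");
    assert!(!dropped);
}
```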
👋 Hi! I see this is a draft PR.
@tnull Any feedback on the last few commits would be appreciated: https://github.com/lightningdevkit/ldk-node/pull/888/changes/492ec9124ae02a4048ecb5270309174b05b366d2..c770999887e1287bef8abe4f4c92ca36e9434067
Adds a public `Node::rbf_channel` API that replaces an in-flight splice transaction with a higher-feerate version.

An RBF leaves multiple candidate splice transactions in flight; only one of them will confirm on chain, and the payment record needs to reflect whichever one did — its amount and this node's share of the fee. The transition from `Pending` to `Succeeded` should also match when the Lightning protocol actually considers the new channel state usable — the moment `ChannelReady` fires — rather than after `ANTI_REORG_DELAY` confirmations, which doesn't fit the zero-conf extreme on one end or higher-confirmation-depth peer configurations on the other.

To support those properties, this PR:
Every broadcast from LDK now triggers a store write. For a remote KV store, running that write on LDK's thread would block message handling for hundreds of milliseconds per call, so persistence happens asynchronously while still guaranteeing that the record is committed before the transaction reaches the chain client.
Based on #878.