feat(p2p): log received messages with peer count #323
Add a dedicated info-level log emitted on each inbound p2p message that includes the connected-peer count, giving operators visibility into network activity correlated with peer connectivity. Logged kinds: block, aggregation, attestation, status_request, status_response, blocks_by_root_request, blocks_by_root_response. Unknown gossip topics remain at trace level. Existing per-message reception logs are unchanged.
🤖 Kimi Code Review

Overall Assessment: The PR adds useful observability for P2P message handling, but has significant gaps in debuggability (missing peer IDs) and operational concerns (log verbosity).

Critical Issues:
1. Missing Peer Identification in Logs
2. Log Level Too Verbose for High-Frequency Messages

Code Quality:
3. Inconsistent Variable Placement
4. Redundant String Allocation in Error Path

Security/Performance Notes:

Suggested Diff Amendments:

```rust
// In gossipsub/handler.rs, line 33:
info!(peer = ?message.source, kind = "block", peer_count, "P2P message received");
// In req_resp/handlers.rs, line 32:
info!(%peer, kind = "status_request", peer_count, "P2P message received");
```

Automated review by Kimi (Moonshot AI) · kimi-k2.5 · custom prompt
🤖 Codex Code Review

No other correctness or consensus-path regressions stood out in this diff; it is otherwise just instrumentation. I did not run tests for this review.

Automated review by OpenAI Codex · gpt-5.4 · custom prompt
Greptile Summary

This PR adds a lightweight observability change. Confidence Score: 5/5 — Safe to merge: logging-only change with no logic mutations, no new types, no signature changes. All changes are additive. No files require special attention.
| Filename | Overview |
|---|---|
| crates/net/p2p/src/gossipsub/handler.rs | Adds a single peer_count capture before the topic match and one info! log per known gossip topic (block, aggregation, attestation); no logic changes. |
| crates/net/p2p/src/req_resp/handlers.rs | Restructures request/response match arms into blocks to scope peer_count captures; adds info! logs for status_request, blocks_by_root_request, status_response, blocks_by_root_response; Response::Error is deliberately excluded. |
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Inbound P2P event] --> B{Protocol}
    B -->|gossipsub| C[handle_gossipsub_message]
    B -->|req_resp| D[handle_req_resp_message]
    C --> C1[peer_count = connected_peers.len]
    C1 --> C2{topic_kind}
    C2 -->|BLOCK| C3["info! kind=block, peer_count"]
    C2 -->|AGGREGATION| C4["info! kind=aggregation, peer_count"]
    C2 -->|ATTESTATION_SUBNET_*| C5["info! kind=attestation, peer_count"]
    C2 -->|unknown| C6["trace! unchanged"]
    D --> D1{message type}
    D1 -->|Request| D2[peer_count = connected_peers.len]
    D2 --> D3{request variant}
    D3 -->|Status| D4["info! kind=status_request, peer_count"]
    D3 -->|BlocksByRoot| D5["info! kind=blocks_by_root_request, peer_count"]
    D1 -->|Response| D6[peer_count = connected_peers.len]
    D6 --> D7{response variant}
    D7 -->|Success/Status| D8["info! kind=status_response, peer_count"]
    D7 -->|Success/BlocksByRoot| D9["info! kind=blocks_by_root_response, peer_count"]
    D7 -->|Error| D10["warn! Received error response unchanged"]
```
Last reviewed commit: "feat(p2p): log received messages with pe..."
🤖 Claude Code Review

Here is my review of PR #323: feat(p2p): log received messages with peer count

Overview: This PR adds a dedicated info!-level log line, carrying the connected-peer count, on each inbound p2p message.

What's Good: …

Issues and Suggestions:
1. Double logging on every received message (design concern). Every inbound message now emits two log lines; for example, a gossip block now produces two entries. The PR justifies this as "Option B" but the cost is real: N attestations + 1 block per slot effectively doubles the per-message log volume.
2. Redundant …
3. Pre-existing convention violation in …

```rust
info!(finalized_slot=%request.finalized.slot, head_slot=%request.head.slot, "Received status request from peer {peer}");
```

4. Gossipsub messages don't have a single canonical source peer (they propagate through the mesh, and …).
5. Minor: This is deliberate (one per …).

Risk: …

Summary: The implementation is correct and follows project conventions. The main open question is whether the "Option B" design (dedicated line per message) is the right long-term trade-off versus Option A (adding `peer_count` to the existing logs).

Automated review by Claude (Anthropic) · sonnet · custom prompt
Motivation

Today, when an ethlambda node receives a p2p message we log per-message-type lines (`Received block from gossip`, `Received attestation from gossip`, `Received BlocksByRoot request`, etc.), but none of them carry the count of currently connected peers. Operators reading logs cannot easily tell, at a glance, whether ongoing gossip activity is being amplified by a healthy mesh or coming from just a couple of peers — they have to cross-reference the last `Peer connected`/`Peer disconnected` event, which can be far up the log.
This PR adds a single, dedicated log line emitted on every inbound p2p message that includes the peer count, giving operators a continuously updated network-health signal aligned with traffic.

The log level matches existing chain/fork events (`Fork choice reorg detected`, `Block imported`, `Peer connected`) — `info!`.

Description
What is logged

A new `info!` log line per inbound p2p message: `kind` is a stable snake_case identifier; `peer_count` is the size of `P2PServer::connected_peers` at the moment of reception.

Logged kinds
| kind | emitted on |
|---|---|
| `block` | block gossip topic message |
| `aggregation` | aggregation gossip topic message |
| `attestation` | `attestation_<subnet_id>` topic message |
| `status_request` | `Status` request |
| `status_response` | `Status` response |
| `blocks_by_root_request` | `BlocksByRoot` request |
| `blocks_by_root_response` | `BlocksByRoot` response |

Deliberate exclusions

- Unknown gossip topics stay at `trace!` (existing behavior, unchanged). They mostly come from cross-version peers and would otherwise add `info!` noise on every misrouted message. This is the one known exception to "every received message".
- `OutboundFailure`/`InboundFailure`/`ResponseSent` in the req/resp protocol are not "received messages" in the user-visible sense and remain unchanged.
- Connection events (`Peer connected`, `Peer disconnected`) already include `peer_count` — untouched.
- Outbound logs (`Published block to gossipsub`, etc.) — out of scope; this PR is about inbound traffic only.
Implementation

Two emit sites — one per protocol entry point — to minimize duplication and keep the new log close to the earliest knowable point:

crates/net/p2p/src/gossipsub/handler.rs
- `peer_count` is captured once at the top of `handle_gossipsub_message` into a local variable. The read is immutable and does not conflict with the later mutable use of `server` inside each match arm.
- `info!(kind = …, peer_count, "P2P message received")` is placed at the head of each known-topic match arm, so the log fires before decompression/decoding work — i.e. it captures the message even if decoding subsequently fails.
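The gossipsub emit site described above can be sketched as follows. This is illustrative only: the types are simplified stand-ins, `println!` stands in for tracing's `info!`, and `topic_kind` is a hypothetical helper name — the real code matches on the crate's own topic constants.

```rust
// Simplified stand-ins for the crate's own types.
struct P2PServer {
    connected_peers: Vec<String>, // peer IDs, simplified
}

struct GossipMessage {
    topic: String,
    data: Vec<u8>,
}

/// Map a gossip topic to the stable snake_case `kind` used in the log,
/// or `None` for unknown topics (which stay at trace level).
fn topic_kind(topic: &str) -> Option<&'static str> {
    if topic == "block" {
        Some("block")
    } else if topic == "aggregation" {
        Some("aggregation")
    } else if topic.starts_with("attestation_") {
        Some("attestation")
    } else {
        None
    }
}

fn handle_gossipsub_message(server: &mut P2PServer, message: &GossipMessage) {
    // Captured once at the top: an immutable read, done before any
    // mutable use of `server` inside the match arms.
    let peer_count = server.connected_peers.len();

    match topic_kind(&message.topic) {
        Some(kind) => {
            // Fires before decompression/decoding, so the message is
            // logged even if decoding later fails.
            println!("kind={kind} peer_count={peer_count} P2P message received");
            // ... decompress / decode / import, possibly mutating `server` ...
            let _ = &message.data;
        }
        None => {
            // Unknown topics stay at trace level (unchanged behavior).
        }
    }
}
```

Capturing `peer_count` before entering the match keeps the immutable borrow short-lived, which is why it does not conflict with the later `&mut server` use.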
crates/net/p2p/src/req_resp/handlers.rs
- In `handle_req_resp_message`, `peer_count` is captured once per `Message::Request`/`Message::Response` branch.
- The `info!` is emitted inside the inner request/response variant match, so `kind` is precise (`status_request` vs `blocks_by_root_request`, etc.).
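The req/resp restructure can be sketched like this, again with simplified stand-in types and `println!` in place of tracing's `info!`/`warn!`; `message_kind` is a hypothetical helper introduced here only so the kind mapping is easy to see in one place.

```rust
// Simplified stand-ins for the crate's own types.
struct P2PServer {
    connected_peers: Vec<String>,
}

enum Request { Status, BlocksByRoot }
enum Response { Status, BlocksByRoot, Error }
enum Message { Request(Request), Response(Response) }

/// The `kind` string a message would be logged with, or `None` for the
/// deliberately excluded error responses.
fn message_kind(message: &Message) -> Option<&'static str> {
    match message {
        Message::Request(Request::Status) => Some("status_request"),
        Message::Request(Request::BlocksByRoot) => Some("blocks_by_root_request"),
        Message::Response(Response::Status) => Some("status_response"),
        Message::Response(Response::BlocksByRoot) => Some("blocks_by_root_response"),
        Message::Response(Response::Error) => None,
    }
}

fn handle_req_resp_message(server: &mut P2PServer, message: Message) {
    match message {
        Message::Request(req) => {
            // Captured once for the whole Request branch.
            let peer_count = server.connected_peers.len();
            if let Some(kind) = message_kind(&Message::Request(req)) {
                println!("kind={kind} peer_count={peer_count} P2P message received");
            }
            // ... dispatch to the existing request handlers ...
        }
        Message::Response(resp) => {
            // Captured once for the whole Response branch.
            let peer_count = server.connected_peers.len();
            match message_kind(&Message::Response(resp)) {
                Some(kind) => {
                    println!("kind={kind} peer_count={peer_count} P2P message received");
                }
                // Error responses keep their existing warn! path; no new log.
                None => {}
            }
        }
    }
}
```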
No new helpers, no new types, no signature changes to the existing `handle_status_request`/`handle_blocks_by_root_request` etc. functions — they already had `&mut P2PServer` access. The `kind` strings are short literals inlined at each call site (≤ 7 sites across 2 files); abstracting them into a helper or enum would have been over-engineering.
Why a dedicated log line, and not just a new `peer_count` field on the existing `Received …` logs?

Two reasonable designs were considered:
- Option A: add a `peer_count` field to each existing `Received …` log (no new log lines, ~7 sites touched).
- Option B: a dedicated `P2P message received` log per reception (one new log line per inbound message).

We chose Option B as a first iteration to evaluate whether the new correlation is useful in practice without altering the existing per-type logs that operators already grep for. If the new line proves redundant with the existing ones, collapsing into Option A is a small follow-up. Notes are in `docs/plans/p2p-received-peer-count-log.md` (local plan doc, not committed) for the rationale.
Field ordering

Follows the project logging convention from `CLAUDE.md` (temporal → identity → identifiers → context → metadata): `kind` is the identity-ish field, `peer_count` is metadata; both come before the message string.
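To make the resulting field order concrete, here is a tiny illustrative helper. The real code emits structured fields via tracing's `info!` rather than formatting a string; this only shows what the convention yields for the line's shape.

```rust
// Illustrative only: `kind` (identity-ish) comes first, then
// `peer_count` (metadata), then the human-readable message.
fn log_line(kind: &str, peer_count: usize) -> String {
    format!("kind={kind} peer_count={peer_count} P2P message received")
}
```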
How to Test

- `make fmt` — passes.
- `make lint` (clippy `-D warnings`) — passes.
- `make test` — passes (full workspace test suite green; no test changes needed since this is logging-only).

Manual checks on a running node:
- `peer_count` matches the value last reported by `Peer connected`/`Peer disconnected` events.
- The expected `kind` values appear during normal operation (`block` and `attestation` every slot; `aggregation` once aggregations occur; `status_*` shortly after each new connection; `blocks_by_root_*` when a peer is missing blocks and re-syncs).
- No `P2P message received` line appears for messages on unknown topics (those stay at `trace!`).

Risk / open questions
- Log volume: one additional `info!` line per inbound message. With N validators, that is roughly N attestations + 1 block per slot per node, plus any aggregations and req/resp traffic — acceptable on the current devnets. If volume becomes a concern in production, the clear follow-up is to collapse `peer_count` into the existing per-type logs (Option A above) and remove this dedicated line.
- Metrics: if we later want a counterpart `lean_p2p_messages_received_total{kind=…}` counter, that can be a separate change.
Files changed

- `crates/net/p2p/src/gossipsub/handler.rs` — 3 new `info!` calls + 1 local `peer_count` capture.
- `crates/net/p2p/src/req_resp/handlers.rs` — 4 new `info!` calls + 2 local `peer_count` captures; restructure of the match to scope `peer_count` to the request/response branches.

No new files. No public API changes. No dependency changes.