fix(jsonrpc): enforce log filter cap and improve match efficiency#6732

Open
317787106 wants to merge 18 commits into tronprotocol:develop from 317787106:hotfix/fix_newFilter

Conversation

@317787106 (Collaborator)

Problem

Three issues exist in the JSON-RPC log filter subsystem (eth_newFilter / eth_getLogs), reported in #6510:

  1. Unbounded memory growth: eth_newFilter imposes no limit on the number of active filters held in memory. A client can register filters indefinitely and eventually exhaust heap on the node.

  2. Correctness bug: LogMatch.matchBlockOneByOne evaluated the MAX_RESULT guard after addAll, so the result list could transiently contain more than MAX_RESULT entries before the exception was thrown.

  3. Performance bottleneck under high filter count: handleLogsFilter iterated the filter map with an Iterator and called result.add() once per element, which is slow and contention-prone when thousands of filters are active.


Changes

1. Enforce a configurable cap on active log filters

Adds node.jsonrpc.maxLogFilterNum (default: 20000). When the cap is reached, eth_newFilter immediately returns JsonRpcExceedLimitException (JSON-RPC code -32005) instead of growing without bound.

node {
  jsonrpc {
    # Maximum number of concurrent eth_newFilter registrations (0 = unlimited)
    maxLogFilterNum = 20000
  }
}
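As a sketch of what the new guard amounts to (class and method names below are illustrative stand-ins, not the actual java-tron code, and IllegalStateException stands in for JsonRpcExceedLimitException / code -32005):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the eth_newFilter cap check described in this PR.
public class FilterCapSketch {

  // Registers a filter, rejecting once the cap is reached; cap <= 0 means unlimited.
  static String newFilter(ConcurrentHashMap<String, Object> filters, int cap, Object filter) {
    if (cap > 0 && filters.size() >= cap) {
      // Stands in for JsonRpcExceedLimitException (JSON-RPC code -32005).
      throw new IllegalStateException("exceed maximum filter number: " + cap);
    }
    String id = UUID.randomUUID().toString();
    filters.put(id, filter);
    return id;
  }
}
```

The check runs before any allocation on behalf of the new filter, so a node at the cap does no extra work per rejected call.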

2. Fix the MAX_RESULT boundary check in LogMatch.matchBlockOneByOne (correctness)

Moves the size + matchedLog.size() > MAX_RESULT guard before addAll. The result list now never exceeds MAX_RESULT entries regardless of how many logs a single block contributes.
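A minimal sketch of the corrected ordering (names and the MAX_RESULT value are assumed for illustration; the real code throws JsonRpcTooManyResultException):

```java
import java.util.List;

// Sketch: the guard runs before addAll, so the shared result list is never
// mutated past the limit. IllegalStateException stands in for
// JsonRpcTooManyResultException.
public class MatchGuardSketch {
  static final int MAX_RESULT = 10000; // assumed value for illustration

  static void appendMatched(List<Object> result, List<Object> matchedLog) {
    // Check first: with the old order (addAll, then check), result could
    // transiently hold more than MAX_RESULT entries before the throw.
    if (result.size() + matchedLog.size() > MAX_RESULT) {
      throw new IllegalStateException("query returned more than " + MAX_RESULT + " results");
    }
    result.addAll(matchedLog);
  }
}
```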

3. Optimize handleLogsFilter for large filter maps

| Condition | Before | After |
| --- | --- | --- |
| filter count ≤ 10,000 (common case) | Iterator loop, result.add() per element | ConcurrentHashMap.forEach with local list + single addAll |
| filter count > 10,000 | same slow path | bounded ForkJoinPool(2) + parallelStream |

Key improvements:

  • Reduced lock contention: elements matched per filter are collected into a local list first, then inserted with a single addAll on the shared LinkedBlockingQueue.
  • Bounded parallelism: the ForkJoinPool is created once and reused (named logs-filter-pool for thread-dump visibility), avoiding unbounded thread creation under spike load.
  • Observability: a logger.debug timing line records dispatch cost and filter-map size.
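A hedged sketch of the two dispatch paths (the threshold and pool size come from the PR; the matching predicate and surrounding types are simplified stand-ins, with an always-match filter in place of real address/topic matching):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.LinkedBlockingQueue;

// Simplified model of the dispatch: String logs instead of real log entries.
public class LogsDispatchSketch {
  static final int FILTER_PARALLEL_THRESHOLD = 10000;
  static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(2); // bounded, reused

  static void dispatch(ConcurrentHashMap<String, LinkedBlockingQueue<String>> filters,
                       List<String> logs) {
    if (filters.size() <= FILTER_PARALLEL_THRESHOLD) {
      // Common case: sequential forEach, batching matches into a local list so
      // the shared queue sees one addAll per filter instead of one add per element.
      filters.forEach((id, queue) -> {
        List<String> matched = new ArrayList<>(logs); // real code filters here
        queue.addAll(matched);
      });
    } else {
      // Large maps: parallelStream confined to the bounded shared pool.
      LOGS_FILTER_POOL.submit(() ->
          filters.entrySet().parallelStream()
              .forEach(e -> e.getValue().addAll(new ArrayList<>(logs)))
      ).join();
    }
  }
}
```

Submitting the parallelStream from inside LOGS_FILTER_POOL keeps the stream's worker threads on that pool rather than the JVM-wide common pool, which is what bounds the parallelism to 2.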

Testing

  • Unit tests (new):
    • HandleLogsFilterTest — 7 cases covering event dispatch, block-range filtering, expired-filter removal, and solidified/non-solidified routing
    • LogMatchOverLimitTest — 4 cases covering under-limit, exact-limit, exceed-limit (verifies exception is thrown before addAll), and empty-block skip
    • WalletCursorTest.testNewFilter_exceedsCapThrowsException — verifies the cap throws the correct exception with the limit value in the message
  • Manual testing

Closes #6510

@@ -400,6 +400,9 @@ node {

# Maximum number for blockFilter
maxBlockFilterNum = 50000

Really nice to see a default-on safeguard added — and using 0 as a sentinel for "unlimited" is a clean pattern.

Minor doc accuracy: the description here says "Maximum number of log entries returned by eth_getLogs / eth_newFilter", but the value actually caps the number of concurrent filter registrations, not the result size. The framework config.conf describes it correctly ("Allowed maximum number for newFilter, >0 otherwise no limit"). Could we align the wording in reference.conf to avoid operators misreading it as a per-query cap?

Suggestion:

# Maximum number of concurrent eth_newFilter registrations (>0; 0 means unlimited)
maxLogFilterNum = 20000

Collaborator Author

Thanks for your suggestion, I've fixed it.


private static final String ERROR_SELECTOR = "08c379a0"; // Function selector for Error(string)
private static final int FILTER_PARALLEL_THRESHOLD = 10000;
private static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(2);
Collaborator

[SHOULD] LOGS_FILTER_POOL is declared as a static (class-level) ForkJoinPool, meaning it is shared across all instances within the JVM. However, it is being shut down inside an instance-level close() method. Once close() is invoked, the shared pool transitions to a terminated state and cannot accept new tasks. Any subsequent usage (including from other instances or test cases) will result in RejectedExecutionException.
This is especially problematic in unit tests, where one test invoking close() can unintentionally affect others due to the shared static resource.
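A minimal JDK-only reproduction of that failure mode (SHARED stands in for LOGS_FILTER_POOL):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RejectedExecutionException;

// Once any caller shuts the shared static pool down, every later submit from
// any instance fails: this is what an instance-level close() causes here.
public class SharedPoolHazard {
  static final ForkJoinPool SHARED = new ForkJoinPool(2);

  static boolean submitAfterShutdown() {
    SHARED.shutdown(); // what one instance's close() does to everyone
    try {
      SHARED.submit(() -> {}).join();
      return true; // unreachable once the pool is terminated
    } catch (RejectedExecutionException e) {
      return false; // new tasks are rejected after shutdown
    }
  }
}
```

Possible fixes: never shut the shared pool down from instance lifecycle code (let JVM shutdown reap the daemonized workers), or make the pool per-instance so close() only affects its owner.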

blockFilter2Result = blockFilter2ResultSolidity;
}
if (blockFilter2Result.size() >= maxBlockFilterNum) {
if (maxBlockFilterNum > 0 && blockFilter2Result.size() >= maxBlockFilterNum) {
Collaborator

newBlockFilter semantics changed silently.
Old: maxBlockFilterNum=0 blocked all calls.
New: 0 means "unlimited".
Worth calling out in the PR description; also update the reference.conf comment for maxBlockFilterNum — currently only maxLogFilterNum documents the new semantics.
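For instance, the reference.conf comment for maxBlockFilterNum could be aligned along these lines (wording assumed, mirroring the maxLogFilterNum suggestion above):

```
node {
  jsonrpc {
    # Maximum number of concurrent newBlockFilter registrations (>0; 0 means unlimited)
    maxBlockFilterNum = 50000
  }
}
```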

* The result list must contain only the logs from block 1 (not the partial block-2 logs).
*/
@Test
public void testExceedsLimit_throwsBeforeAddAll()
Collaborator

[NIT] testExceedsLimit_throwsBeforeAddAll doesn't actually verify "before addAll": the pre-fix code also threw JsonRpcTooManyResultException, just after the result list had already been mutated.


try {
logMatch.matchBlockOneByOne();
Assert.fail("Expected JsonRpcTooManyResultException");
Collaborator

[NIT] Use assertThrows instead of try/fail/catch for expected-exception tests
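For illustration, a JDK-only model of the pattern (JUnit 4.13+ provides org.junit.Assert.assertThrows with this shape; the helper below only exists to keep the example self-contained):

```java
// Minimal stand-in for JUnit's assertThrows: runs the body, asserts that the
// expected exception type is thrown, and returns it for further assertions.
public class AssertThrowsSketch {

  interface Executable {
    void execute() throws Throwable;
  }

  static <T extends Throwable> T assertThrows(Class<T> expected, Executable body) {
    try {
      body.execute();
    } catch (Throwable t) {
      if (expected.isInstance(t)) {
        return expected.cast(t); // caller can assert on the message, cause, etc.
      }
      throw new AssertionError("unexpected exception type: " + t.getClass().getName(), t);
    }
    throw new AssertionError("expected " + expected.getSimpleName() + " but nothing was thrown");
  }
}
```

With JUnit's version, the try/fail/catch in the test collapses to a single line such as assertThrows(JsonRpcTooManyResultException.class, logMatch::matchBlockOneByOne).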

private static final String FILTER_NOT_FOUND = "filter not found";
public static final int EXPIRE_SECONDS = 5 * 60;
private static final int maxBlockFilterNum = Args.getInstance().getJsonRpcMaxBlockFilterNum();
private static final int maxLogFilterNum = Args.getInstance().getJsonRpcMaxLogFilterNum();
Collaborator

[NIT] maxLogFilterNum is snapshotted into a static final field at class load. That's why the cap test has to populate 20K entries: Args.set...() from tests has no effect. Reading via Args.getInstance() on each call (singleton, cheap) would let tests inject cap=5.
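A sketch of the difference (the nested Args below is a stand-in for the real Args singleton, with a setter assumed for test injection):

```java
// Contrast: a value snapshotted into static final at class load vs. a value
// read from the singleton on each call. Only the latter sees test overrides.
public class CapReadSketch {

  // Hypothetical stand-in for the real Args singleton.
  static class Args {
    private static volatile int maxLogFilterNum = 20000;
    static int getJsonRpcMaxLogFilterNum() { return maxLogFilterNum; }
    static void setJsonRpcMaxLogFilterNum(int v) { maxLogFilterNum = v; }
  }

  // Snapshotted once at class load: later setter calls from tests are invisible.
  static final int SNAPSHOT = Args.getJsonRpcMaxLogFilterNum();

  // Read per call: a test can inject cap = 5 before exercising the code path.
  static int currentCap() { return Args.getJsonRpcMaxLogFilterNum(); }
}
```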

import org.tron.protos.Protocol.TransactionInfo.Log;

/**
* Verifies the over-limit check in {@link LogMatch#matchBlockOneByOne()} introduced in PR #71.
Collaborator

[NIT] Javadoc reference to "PR #71" in LogMatchOverLimitTest is fork-local, should drop it.

import org.tron.core.services.jsonrpc.filters.LogFilterAndResult;
import org.tron.protos.Protocol.TransactionInfo;

public class HandleLogsFilterTest {
Collaborator

[SHOULD] Parallel branch has zero test coverage.

@Slf4j(topic = "API")
public class JsonRpcApiUtil {

static SecureRandom random = new SecureRandom();
Collaborator

[NIT] Hoisting random to a static field is a nice perf win 👍. One small follow-up: consider tightening it to private static final:
private static final SecureRandom random = new SecureRandom();
As written (package-private, non-final), anything in org.tron.core.services.jsonrpc can reassign it. A well-meaning test or perf experiment could quietly do:
JsonRpcApiUtil.random = new Random(42); // compiles, no warning
…and filter IDs silently become predictable, which breaks the unguessability assumption clients rely on. private static final closes that path at compile time, and also gives JMM safe-publication guarantees for free.



Successfully merging this pull request may close these issues.

Parallelizing eth_newFilter event matching

4 participants