fix(jsonrpc): enforce log filter cap and improve match efficiency #6732
317787106 wants to merge 18 commits into tronprotocol:develop
Conversation
@@ -400,6 +400,9 @@ node {

  # Maximum number for blockFilter
  maxBlockFilterNum = 50000
Really nice to see a default-on safeguard added — and using 0 as a sentinel for "unlimited" is a clean pattern.
Minor doc accuracy: the description here says "Maximum number of log entries returned by eth_getLogs / eth_newFilter", but the value actually caps the number of concurrent filter registrations, not the result size. The framework config.conf describes it correctly ("Allowed maximum number for newFilter, >0 otherwise no limit"). Could we align the wording in reference.conf to avoid operators misreading it as a per-query cap?
Suggestion:
# Maximum number of concurrent eth_newFilter registrations (>0; 0 means unlimited)
maxLogFilterNum = 20000
Thanks for the suggestion, I've fixed it.
private static final String ERROR_SELECTOR = "08c379a0"; // Function selector for Error(string)
private static final int FILTER_PARALLEL_THRESHOLD = 10000;
private static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(2);
[SHOULD] LOGS_FILTER_POOL is declared as a static (class-level) ForkJoinPool, meaning it is shared across all instances within the JVM. However, it is being shut down inside an instance-level close() method. Once close() is invoked, the shared pool transitions to a terminated state and cannot accept new tasks. Any subsequent usage (including from other instances or test cases) will result in RejectedExecutionException.
This is especially problematic in unit tests, where one test invoking close() can unintentionally affect others due to the shared static resource.
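A minimal, self-contained repro of the hazard (only the pool declaration mirrors the diff; the surrounding class is hypothetical):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RejectedExecutionException;

class SharedPoolHazard {
  // Mirrors the diff: a class-level pool shared by every instance in the JVM.
  private static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(2);

  public void close() {
    // Shutting the static pool down from an instance method terminates it JVM-wide.
    LOGS_FILTER_POOL.shutdown();
  }

  public static void main(String[] args) {
    new SharedPoolHazard().close();     // one instance (or one test) closes...
    try {
      LOGS_FILTER_POOL.submit(() -> 1); // ...every later caller is rejected
      System.out.println("submitted");
    } catch (RejectedExecutionException e) {
      System.out.println("RejectedExecutionException after close()");
    }
  }
}
```

Making the pool an instance field (closed with its owner), or leaving a truly static pool un-closed for the JVM's lifetime, are the two usual ways out.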
  blockFilter2Result = blockFilter2ResultSolidity;
}
- if (blockFilter2Result.size() >= maxBlockFilterNum) {
+ if (maxBlockFilterNum > 0 && blockFilter2Result.size() >= maxBlockFilterNum) {
newBlockFilter semantics changed silently.
Old: maxBlockFilterNum=0 blocked all calls.
New: 0 means "unlimited".
Worth calling out in the PR description; also update the reference.conf comment for maxBlockFilterNum — currently only maxLogFilterNum documents the new semantics.
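A possible reference.conf wording covering both keys (a sketch only; the exact comment text is up to the maintainers, and the old-vs-new 0 semantics are taken from this thread):

```
# Allowed maximum number of active eth_newBlockFilter registrations.
# >0 enforces the cap; 0 now means unlimited (previously 0 blocked all calls).
maxBlockFilterNum = 50000

# Allowed maximum number of active eth_newFilter registrations; 0 means unlimited.
maxLogFilterNum = 20000
```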
 * The result list must contain only the logs from block 1 (not the partial block-2 logs).
 */
@Test
public void testExceedsLimit_throwsBeforeAddAll()
[NIT] testExceedsLimit_throwsBeforeAddAll doesn't actually verify "before addAll": the pre-fix code also threw JsonRpcTooManyResultException. To pin down the ordering, the test should also assert that the result list still contains only the block-1 logs and none of the partial block-2 entries.
try {
  logMatch.matchBlockOneByOne();
  Assert.fail("Expected JsonRpcTooManyResultException");
[NIT] Use assertThrows instead of try/fail/catch for expected-exception tests.
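For illustration, a sketch of the suggested pattern. JUnit 4.13's Assert.assertThrows has this shape; a minimal stand-in is defined locally here only so the snippet runs without the JUnit dependency:

```java
class AssertThrowsSketch {
  interface ThrowingRunnable { void run() throws Throwable; }

  // Minimal stand-in with the same shape as org.junit.Assert.assertThrows (JUnit 4.13+).
  static <T extends Throwable> T assertThrows(Class<T> expected, ThrowingRunnable r) {
    try {
      r.run();
    } catch (Throwable t) {
      if (expected.isInstance(t)) {
        return expected.cast(t); // expected exception: return it for further assertions
      }
      throw new AssertionError("unexpected exception type: " + t, t);
    }
    throw new AssertionError("expected " + expected.getSimpleName() + " but nothing was thrown");
  }

  public static void main(String[] args) {
    // In the real test this becomes a one-liner, e.g.:
    //   assertThrows(JsonRpcTooManyResultException.class, logMatch::matchBlockOneByOne);
    IllegalStateException e = assertThrows(IllegalStateException.class,
        () -> { throw new IllegalStateException("too many results"); });
    System.out.println("caught: " + e.getMessage());
  }
}
```

Besides being shorter, assertThrows returns the exception, so the message assertion can follow directly instead of living inside a catch block.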
private static final String FILTER_NOT_FOUND = "filter not found";
public static final int EXPIRE_SECONDS = 5 * 60;
private static final int maxBlockFilterNum = Args.getInstance().getJsonRpcMaxBlockFilterNum();
private static final int maxLogFilterNum = Args.getInstance().getJsonRpcMaxLogFilterNum();
[NIT] maxLogFilterNum is snapshotted into a static final at class load. That's why the cap test has to populate 20K entries — Args.set...() from tests has no effect. Reading via Args.getInstance() on each call (singleton, cheap) would let tests inject cap=5.
import org.tron.protos.Protocol.TransactionInfo.Log;
/**
 * Verifies the over-limit check in {@link LogMatch#matchBlockOneByOne()} introduced in PR #71.
[NIT] Javadoc reference to "PR #71" in LogMatchOverLimitTest is fork-local, should drop it.
import org.tron.core.services.jsonrpc.filters.LogFilterAndResult;
import org.tron.protos.Protocol.TransactionInfo;
public class HandleLogsFilterTest {
[SHOULD] The parallel branch has zero test coverage.
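A hedged sketch of what such a test could exercise: FILTER_PARALLEL_THRESHOLD and the two-worker pool mirror the diff, while dispatch() is a stand-in for handleLogsFilter's branch logic, not the real method:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Collectors;

class ParallelBranchSketch {
  static final int FILTER_PARALLEL_THRESHOLD = 10_000;
  static final ForkJoinPool LOGS_FILTER_POOL = new ForkJoinPool(2);

  // Stand-in for handleLogsFilter's dispatch: parallel at/above the threshold.
  static List<Integer> dispatch(Map<String, Integer> filters) throws Exception {
    if (filters.size() >= FILTER_PARALLEL_THRESHOLD) {
      return LOGS_FILTER_POOL.submit(
          () -> filters.values().parallelStream().sorted().collect(Collectors.toList())).get();
    }
    List<Integer> out = new ArrayList<>(filters.values());
    out.sort(null);
    return out;
  }

  public static void main(String[] args) throws Exception {
    Map<String, Integer> filters = new ConcurrentHashMap<>();
    for (int i = 0; i < FILTER_PARALLEL_THRESHOLD; i++) {
      filters.put("filter-" + i, i);
    }
    List<Integer> parallel = dispatch(filters);            // exercises the parallel branch
    filters.keySet().removeIf(k -> !k.equals("filter-0")); // shrink below the threshold
    List<Integer> sequential = dispatch(filters);          // exercises the sequential branch
    System.out.println(parallel.size() + " " + sequential.size());
  }
}
```

The key point is simply to populate the map past the threshold so the pool-backed path actually runs, then assert both branches yield equivalent results.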
@Slf4j(topic = "API")
public class JsonRpcApiUtil {
static SecureRandom random = new SecureRandom();
[NIT] Hoisting random to a static field is a nice perf win👍. One small follow-up: consider tightening it to private static final:
private static final SecureRandom random = new SecureRandom();
As written (package-private, non-final), anything in org.tron.core.services.jsonrpc can reassign it. A well-meaning test or perf experiment could quietly do: JsonRpcApiUtil.random = new Random(42); // compiles, no warning
…and filter IDs silently become predictable, which breaks the unguessability assumption clients rely on. private static final closes that path at compile time, and also gives JMM safe-publication guarantees for free.
Problem

Three issues exist in the JSON-RPC log filter subsystem (eth_newFilter / eth_getLogs), reported in #6510:

1. Unbounded memory growth — eth_newFilter imposes no limit on the number of active filters held in memory. A client can register filters indefinitely and eventually exhaust heap on the node.
2. Correctness bug — LogMatch.matchBlockOneByOne evaluated the MAX_RESULT guard after addAll, so the result list could transiently contain more than MAX_RESULT entries before the exception was thrown.
3. Performance bottleneck under high filter count — handleLogsFilter iterated the filter map with an Iterator and called result.add() once per element, which is slow and contention-prone when thousands of filters are active.

Changes
1. Enforce a configurable cap on active log filters

Adds node.jsonrpc.maxLogFilterNum (default: 20000). When the cap is reached, eth_newFilter immediately returns JsonRpcExceedLimitException (JSON-RPC code -32005) instead of growing without bound.

2. Fix the MAX_RESULT boundary check in LogMatch.matchBlockOneByOne (correctness)

Moves the size + matchedLog.size() > MAX_RESULT guard before addAll. The result list now never exceeds MAX_RESULT entries regardless of how many logs a single block contributes.
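The reordered guard can be sketched as follows (names mirror the description above; IllegalStateException stands in for JsonRpcTooManyResultException, and MAX_RESULT is shrunk for the demo):

```java
import java.util.ArrayList;
import java.util.List;

class GuardBeforeAddAll {
  static final int MAX_RESULT = 3; // tiny cap, for demonstration only

  static void appendMatched(List<String> result, List<String> matchedLog) {
    // Check the combined size BEFORE mutating the result list, so it can
    // never transiently exceed MAX_RESULT.
    if (result.size() + matchedLog.size() > MAX_RESULT) {
      throw new IllegalStateException("query returned more than " + MAX_RESULT + " results");
    }
    result.addAll(matchedLog); // only reached when the cap is respected
  }

  public static void main(String[] args) {
    List<String> result = new ArrayList<>(List.of("a", "b"));
    try {
      appendMatched(result, List.of("c", "d")); // 2 + 2 > 3: throws first
    } catch (IllegalStateException e) {
      System.out.println("threw before addAll; size=" + result.size());
    }
  }
}
```

With the pre-fix ordering (addAll first, check after), the same call would leave four entries in the list at the moment the exception is raised.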
3. Optimize handleLogsFilter for large filter maps

Before: Iterator loop with result.add() per element.
After: ConcurrentHashMap.forEach with a local list and a single addAll; large maps are dispatched via ForkJoinPool(2) + parallelStream.

Key improvements:

- A single addAll on the shared LinkedBlockingQueue replaces per-element adds.
- The ForkJoinPool is created once and reused (named logs-filter-pool for thread-dump visibility), avoiding unbounded thread creation under spike load.
- A logger.debug timing line records dispatch cost and filter-map size.

Testing

- HandleLogsFilterTest — 7 cases covering event dispatch, block-range filtering, expired-filter removal, and solidified/non-solidified routing
- LogMatchOverLimitTest — 4 cases covering under-limit, exact-limit, exceed-limit (verifies the exception is thrown before addAll), and empty-block skip
- WalletCursorTest.testNewFilter_exceedsCapThrowsException — verifies the cap throws the correct exception with the limit value in the message

Closes #6510
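As a rough, self-contained sketch of the batching in change 3 (the map and queue types come from the description above; the match predicate and all names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

class BatchedDispatchSketch {
  public static void main(String[] args) {
    ConcurrentHashMap<String, Integer> filters = new ConcurrentHashMap<>();
    for (int i = 0; i < 100; i++) {
      filters.put("filter-" + i, i);
    }

    LinkedBlockingQueue<String> shared = new LinkedBlockingQueue<>();
    List<String> local = new ArrayList<>();
    // Walk the map with forEach and collect matches locally: no lock
    // contention on the shared queue inside the loop.
    filters.forEach((id, value) -> {
      if (value % 2 == 0) {    // stand-in for the real log-match predicate
        local.add(id);
      }
    });
    shared.addAll(local);      // one synchronized hand-off instead of one per element

    System.out.println("queued=" + shared.size());
  }
}
```

The win is purely in contention: each add() on a LinkedBlockingQueue takes a lock, so collecting into an uncontended local list and publishing once shrinks the critical section to a single operation.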