fix(start-server-core): enforce payload size limit on POST server function requests #7233
instantraaamen wants to merge 1 commit into TanStack:main from
Conversation
Bundle Size Benchmarks
The trend sparkline shows historical gzip bytes, ending with this PR's measurement; lower is better.
📝 Walkthrough

Modified JSON request body parsing in the server functions handler to enforce maximum payload size validation. The change replaces the built-in `request.json()` call with a `request.text()` read, a length check against `MAX_PAYLOAD_SIZE`, and an explicit `JSON.parse`.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/start-server-core/src/server-functions-handler.ts`:
- Around line 153-157: Replace the current buffering via await request.text()
followed by text.length check with a streaming byte-limit reader: add a helper
function (e.g., readTextWithLimit(request: Request, maxBytes: number):
Promise<string>) that reads request.body.getReader() in chunks, accumulates
Uint8Array chunks, tracks received byteLength, cancels and throws 'Payload too
large' when received > MAX_PAYLOAD_SIZE, and returns decoded text; then call
jsonText = await readTextWithLimit(request, MAX_PAYLOAD_SIZE) and use
JSON.parse(jsonText) where jsonPayload is set (replacing the existing await
request.text() + length check) so the limit is enforced on bytes during
streaming rather than on UTF-16 code units after buffering.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 43643a45-dc00-4a21-bafa-e9936e490221
📒 Files selected for processing (1)
packages/start-server-core/src/server-functions-handler.ts
```ts
const text = await request.text()
if (text.length > MAX_PAYLOAD_SIZE) {
  throw new Error('Payload too large')
}
jsonPayload = JSON.parse(text)
```
🧩 Analysis chain
🌐 Web query:
Does Fetch Request.text() read the entire body before resolving, and can Request.body be consumed incrementally as a ReadableStream<Uint8Array> via getReader()?
💡 Result:
Yes, Request.text reads the entire body before resolving. It implements the Body mixin, which consumes the full ReadableStream by extracting all bytes until end-of-stream, then decodes them to a UTF-8 string. The spec's consume body algorithm sets the disturbed flag and reads "from request's body until it returns end-of-stream" in parallel, resolving only after the complete body is processed. Yes, Request.body can be consumed incrementally as a ReadableStream via getReader. Request.body is a ReadableStream (or null for GET/HEAD), and getReader locks it to read Uint8Array chunks sequentially using reader.read in a loop until done=true. Calling text or getReader disturbs/locks the body, preventing reuse without cloning.
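The incremental path the answer describes can be seen in isolation. This is a sketch assuming a runtime with the Fetch API (e.g. Node 18+); the URL and payload are placeholders:

```typescript
// Build a throwaway POST Request and count its body bytes chunk by chunk
// via getReader(), instead of buffering everything with text().
const req = new Request('http://example.test/', {
  method: 'POST',
  body: 'hello world', // 11 ASCII bytes
})

const reader = req.body!.getReader()
let received = 0
for (;;) {
  const { done, value } = await reader.read()
  if (done) break
  received += value.byteLength // bytes seen so far, checkable mid-stream
}

console.log(received) // 11
```

Because `received` is updated per chunk, a size limit can be enforced before the whole body ever lands in memory.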
Citations:
- 1: https://developer.mozilla.org/en-US/docs/Web/API/Request/text
- 2: https://stackoverflow.com/questions/35346902/what-does-request-text-and-other-functions-implemented-from-body-actually-do
- 3: https://developer.mozilla.org/en-US/docs/Web/API/Request/body
- 4: https://fetch.spec.wintercg.org/
🏁 Script executed:
```shell
# Check if the file exists and read lines 150-160 for context
if [ -f "packages/start-server-core/src/server-functions-handler.ts" ]; then
  echo "=== File exists. Reading lines 150-160 ==="
  sed -n '150,160p' "packages/start-server-core/src/server-functions-handler.ts" | cat -n
  echo ""
  echo "=== Full context: lines 140-170 ==="
  wc -l "packages/start-server-core/src/server-functions-handler.ts"
else
  echo "File not found at that path. Searching for similar files..."
  find . -name "server-functions-handler.ts" 2>/dev/null
fi
```

Repository: TanStack/router
Length of output: 634
🏁 Script executed:
```shell
# Search for MAX_PAYLOAD_SIZE definition and usage
rg "MAX_PAYLOAD_SIZE" packages/start-server-core/src/ -B 2 -A 2
```

Repository: TanStack/router
Length of output: 1536
🏁 Script executed:
```shell
# Search for readTextWithLimit or similar helper functions
rg "readTextWithLimit|getReader|Request\.body" packages/start-server-core/src/server-functions-handler.ts -B 2 -A 5
```

Repository: TanStack/router
Length of output: 41
Enforce the limit while streaming, not after buffering the entire body.
`await request.text()` reads the entire POST body into memory before the size check on line 154, so the large-body DoS this PR targets remains possible. Additionally, `text.length` counts UTF-16 code units rather than bytes, which does not accurately enforce the `MAX_PAYLOAD_SIZE` byte limit for non-ASCII JSON.
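The code-unit/byte mismatch is easy to demonstrate (illustrative values):

```typescript
// '€' (U+20AC) is one UTF-16 code unit but three UTF-8 bytes, so a limit
// checked with .length undercounts the real wire size by 3x here.
const text = '€'.repeat(10)
const codeUnits = text.length
const bytes = new TextEncoder().encode(text).byteLength

console.log(codeUnits, bytes) // 10 30
```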
Use streaming to check the byte limit before buffering:
Suggested implementation
```diff
- const text = await request.text()
- if (text.length > MAX_PAYLOAD_SIZE) {
-   throw new Error('Payload too large')
- }
+ const text = await readTextWithLimit(request, MAX_PAYLOAD_SIZE)
  jsonPayload = JSON.parse(text)
```

Add this helper with explicit type annotations:
```ts
async function readTextWithLimit(
  request: Request,
  maxBytes: number,
): Promise<string> {
  const reader = request.body?.getReader()
  if (!reader) {
    return ''
  }
  const chunks: Array<Uint8Array> = []
  let received = 0
  try {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      if (!value) continue
      received += value.byteLength
      if (received > maxBytes) {
        await reader.cancel()
        throw new Error('Payload too large')
      }
      chunks.push(value)
    }
  } finally {
    reader.releaseLock()
  }
  const body = new Uint8Array(received)
  let offset = 0
  for (const chunk of chunks) {
    body.set(chunk, offset)
    offset += chunk.byteLength
  }
  return new TextDecoder().decode(body)
}
```
I'm debating between a minimal
what is state of the art in other frameworks here?
I dug into the source code and ran identical tests across 7 frameworks. Here's what I found:

Test setup

All frameworks tested on Node.js v23.11.0 with the same payloads:
GET query string limit
POST body size limit
Source code references
How to read the attached logs
Possible approaches
My take

Option B (

Happy to update the PR to whichever approach you'd prefer.

Attachment: express.log
GET requests to server function endpoints already enforce a `MAX_PAYLOAD_SIZE` (1 MB) check on the query string payload, but POST requests with `application/json` bodies go straight to `request.json()` with no size validation.

The GET path has this guard:

But the POST path skips any size check:

On self-hosted Node.js deployments without a reverse proxy (nginx, Cloudflare, etc.), there's nothing stopping an attacker from sending a multi-gigabyte POST body to any `/_serverFn` endpoint, which will get fully buffered into memory. Managed platforms like Vercel, Cloudflare Workers, and AWS Lambda already have their own request body limits, so they're not affected, but the gap between GET and POST seems like an oversight regardless.

This change reads the body as text first, checks its length against the same `MAX_PAYLOAD_SIZE` constant, then parses. Nothing fancy, just making the two paths consistent.