
fix(start-server-core): enforce payload size limit on POST server function requests#7233

Open
instantraaamen wants to merge 1 commit into TanStack:main from instantraaamen:fix/post-body-size-limit

Conversation


@instantraaamen instantraaamen commented Apr 20, 2026

GET requests to server function endpoints already enforce a MAX_PAYLOAD_SIZE (1MB) check on the query string payload, but POST requests with application/json bodies go straight to request.json() with no size validation.

The GET path has this guard:

// Maximum payload size for GET requests (1MB)
const MAX_PAYLOAD_SIZE = 1_000_000

// ...

if (payloadParam && payloadParam.length > MAX_PAYLOAD_SIZE) {
  throw new Error('Payload too large')
}

But the POST path skips any size check:

let jsonPayload
if (contentType?.includes('application/json')) {
  jsonPayload = await request.json()
}

On self-hosted Node.js deployments without a reverse proxy (nginx, Cloudflare, etc.), there's nothing stopping an attacker from sending a multi-gigabyte POST body to any /_serverFn endpoint, which will get fully buffered into memory.

Managed platforms like Vercel, Cloudflare Workers, and AWS Lambda already enforce their own request body limits, so they're not affected — but the gap between GET and POST seems like an oversight regardless.

This change reads the body as text first, checks its length against the same MAX_PAYLOAD_SIZE constant, then parses. Nothing fancy, just making the two paths consistent.
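As a sketch, the new POST path has roughly this shape (the helper name here is illustrative, not code from the PR, which inlines the check):

```typescript
// Sketch of the fix: buffer the JSON body as text, apply the same 1MB
// constant the GET path already uses, then parse. The function name is
// hypothetical; the PR performs these steps inline in the handler.
const MAX_PAYLOAD_SIZE = 1_000_000

async function parseJsonBodyWithLimit(request: Request): Promise<unknown> {
  const text = await request.text()
  if (text.length > MAX_PAYLOAD_SIZE) {
    throw new Error('Payload too large')
  }
  return JSON.parse(text)
}
```

Note that `request.text()` still buffers the whole body before the check runs, a point the review below picks up on.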

Summary by CodeRabbit

  • Bug Fixes
    • Added payload size validation to server functions to prevent processing of excessively large requests, improving stability and resource efficiency.

@github-actions
Contributor

Bundle Size Benchmarks

  • Commit: cbf9ecfc69f1
  • Measured at: 2026-04-20T13:01:58.890Z
  • Baseline source: history:cd91ceebb84b
  • Dashboard: bundle-size history
| Scenario | Current (gzip) | Delta vs baseline | Raw | Brotli | Trend |
| --- | --- | --- | --- | --- | --- |
| react-router.minimal | 87.35 KiB | 0 B (0.00%) | 274.60 KiB | 75.97 KiB | ▁▁▁████████ |
| react-router.full | 90.63 KiB | 0 B (0.00%) | 285.74 KiB | 78.87 KiB | ▁▁▁▂▂██████ |
| solid-router.minimal | 35.55 KiB | 0 B (0.00%) | 106.71 KiB | 31.96 KiB | ▁▁▁▂▂▂▂▆███ |
| solid-router.full | 40.02 KiB | 0 B (0.00%) | 120.20 KiB | 35.94 KiB | ▁▁▁▂▂▂▂▇███ |
| vue-router.minimal | 53.30 KiB | 0 B (0.00%) | 152.01 KiB | 47.88 KiB | ▁▁▁████████ |
| vue-router.full | 58.20 KiB | 0 B (0.00%) | 167.43 KiB | 52.06 KiB | ▁▁▁████████ |
| react-start.minimal | 101.77 KiB | 0 B (0.00%) | 322.39 KiB | 88.05 KiB | ▁▁▁▃▃██████ |
| react-start.full | 105.21 KiB | 0 B (0.00%) | 332.72 KiB | 90.89 KiB | ▁▁▁▃▃██████ |
| solid-start.minimal | 49.53 KiB | 0 B (0.00%) | 152.52 KiB | 43.68 KiB | ▁▁▁▄▄▄▄█▇▇▇ |
| solid-start.full | 55.07 KiB | 0 B (0.00%) | 168.73 KiB | 48.43 KiB | ▁▁▁▂▂▂▂▅███ |

Trend sparkline is historical gzip bytes ending with this PR measurement; lower is better.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 20, 2026

📝 Walkthrough

Walkthrough

Modified JSON request body parsing in the server functions handler to enforce maximum payload size validation. The change replaces the built-in request.json() method with explicit text reading, size checking, and JSON parsing, adding protection against oversized requests.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Payload Size Validation**<br>`packages/start-server-core/src/server-functions-handler.ts` | Replaced `await request.json()` with `await request.text()`, added `MAX_PAYLOAD_SIZE` enforcement, and explicit error handling for oversized JSON payloads. |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰 A payload comes hopping down the lane,
But oh! It's simply far too large to maintain,
With size checks in place, we stand so tall,
No giants of data shall breach our wall! 🛡️

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately describes the main change: enforcing payload size limits on POST server function requests, which directly matches the code change that replaces JSON parsing with text-based parsing and size validation. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/start-server-core/src/server-functions-handler.ts`:
- Around line 153-157: Replace the current buffering via await request.text()
followed by text.length check with a streaming byte-limit reader: add a helper
function (e.g., readTextWithLimit(request: Request, maxBytes: number):
Promise<string>) that reads request.body.getReader() in chunks, accumulates
Uint8Array chunks, tracks received byteLength, cancels and throws 'Payload too
large' when received > MAX_PAYLOAD_SIZE, and returns decoded text; then call
jsonText = await readTextWithLimit(request, MAX_PAYLOAD_SIZE) and use
JSON.parse(jsonText) where jsonPayload is set (replacing the existing await
request.text() + length check) so the limit is enforced on bytes during
streaming rather than on UTF-16 code units after buffering.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 43643a45-dc00-4a21-bafa-e9936e490221

📥 Commits

Reviewing files that changed from the base of the PR and between cbf9ecf and ab78cd8.

📒 Files selected for processing (1)
  • packages/start-server-core/src/server-functions-handler.ts

Comment on lines +153 to +157
const text = await request.text()
if (text.length > MAX_PAYLOAD_SIZE) {
  throw new Error('Payload too large')
}
jsonPayload = JSON.parse(text)
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Does Fetch Request.text() read the entire body before resolving, and can Request.body be consumed incrementally as a ReadableStream<Uint8Array> via getReader()?

💡 Result:

Yes, Request.text reads the entire body before resolving. It implements the Body mixin, which consumes the full ReadableStream by extracting all bytes until end-of-stream, then decodes them to a UTF-8 string. The spec's consume body algorithm sets the disturbed flag and reads "from request's body until it returns end-of-stream" in parallel, resolving only after the complete body is processed. Yes, Request.body can be consumed incrementally as a ReadableStream via getReader. Request.body is a ReadableStream (or null for GET/HEAD), and getReader locks it to read Uint8Array chunks sequentially using reader.read in a loop until done=true. Calling text or getReader disturbs/locks the body, preventing reuse without cloning.



🏁 Script executed:

# Check if the file exists and read lines 150-160 for context
if [ -f "packages/start-server-core/src/server-functions-handler.ts" ]; then
  echo "=== File exists. Reading lines 150-160 ==="
  sed -n '150,160p' "packages/start-server-core/src/server-functions-handler.ts" | cat -n
  echo ""
  echo "=== Full context: lines 140-170 ==="
  wc -l "packages/start-server-core/src/server-functions-handler.ts"
else
  echo "File not found at that path. Searching for similar files..."
  find . -name "server-functions-handler.ts" 2>/dev/null
fi

Repository: TanStack/router

Length of output: 634


🏁 Script executed:

# Search for MAX_PAYLOAD_SIZE definition and usage
rg "MAX_PAYLOAD_SIZE" packages/start-server-core/src/ -B 2 -A 2

Repository: TanStack/router

Length of output: 1536


🏁 Script executed:

# Search for readTextWithLimit or similar helper functions
rg "readTextWithLimit|getReader|Request\.body" packages/start-server-core/src/server-functions-handler.ts -B 2 -A 5

Repository: TanStack/router

Length of output: 41


Enforce the limit while streaming, not after buffering the entire body.

await request.text() reads the entire POST body into memory before the size check on line 154, so the large-body DoS this PR targets remains possible. Additionally, text.length counts UTF-16 code units rather than bytes, which does not accurately enforce the MAX_PAYLOAD_SIZE byte limit for non-ASCII JSON.
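The code-unit vs. byte distinction is easy to demonstrate: for multi-byte characters, `String.prototype.length` undercounts the wire size.

```typescript
// '€' (U+20AC) is one UTF-16 code unit but three UTF-8 bytes on the wire,
// so a limit checked via text.length can admit up to ~3x the intended
// byte budget for worst-case non-ASCII payloads.
const euro = '€'
const codeUnits = euro.length // 1
const utf8Bytes = new TextEncoder().encode(euro).byteLength // 3

console.log(codeUnits, utf8Bytes)
```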

Use streaming to check the byte limit before buffering:

Suggested implementation
-          const text = await request.text()
-          if (text.length > MAX_PAYLOAD_SIZE) {
-            throw new Error('Payload too large')
-          }
+          const text = await readTextWithLimit(request, MAX_PAYLOAD_SIZE)
           jsonPayload = JSON.parse(text)

Add this helper with explicit type annotations:

async function readTextWithLimit(
  request: Request,
  maxBytes: number,
): Promise<string> {
  const reader = request.body?.getReader()
  if (!reader) {
    return ''
  }

  const chunks: Array<Uint8Array> = []
  let received = 0

  try {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      if (!value) continue

      received += value.byteLength
      if (received > maxBytes) {
        await reader.cancel()
        throw new Error('Payload too large')
      }

      chunks.push(value)
    }
  } finally {
    reader.releaseLock()
  }

  const body = new Uint8Array(received)
  let offset = 0
  for (const chunk of chunks) {
    body.set(chunk, offset)
    offset += chunk.byteLength
  }

  return new TextDecoder().decode(body)
}

@instantraaamen
Author

I'm debating between a minimal Content-Length pre-check (covers all browser fetch() calls, ~2 lines) and the full streaming approach.
Feedback welcome from maintainers.

@schiller-manuel
Contributor

What is the state of the art in other frameworks here?

@instantraaamen
Author

I dug into the source code and ran identical tests across 7 frameworks. Here's what I found:

Test setup

All frameworks tested on Node.js v23.11.0 with the same payloads:

  • Test A — GET 5KB query string (valid JSON, URL-encoded)
  • Test B — POST 200KB JSON
  • Test C — POST 2MB JSON
  • Test D — POST 2MB JSON (verbose, to observe streaming behavior)

GET query string limit

| Framework | GET 5KB (Test A) | Notes |
| --- | --- | --- |
| Express | 200 | No query string size check. Node.js `--max-http-header-size` (16KB) is the ceiling. |
| Hono | 200 | Same — no framework-level GET limit |
| SvelteKit | 200 | `BODY_SIZE_LIMIT` applies to request bodies only, not query strings |
| Next.js | 200 | Server Actions are POST-only by design |
| H3 | 200 | No limit |
| React Router | 200 | No limit |
| TanStack Start | 200 | `MAX_PAYLOAD_SIZE` (1MB) check exists on GET — unique among these frameworks |

POST body size limit

| Framework | POST 200KB (Test B) | POST 2MB (Test C) | Streaming abort? (Test D) |
| --- | --- | --- | --- |
| Express 5.2.1 (`json({ limit: '1mb' })`) | 200 | 413 Payload Too Large | No (full upload, rejected after) |
| Hono 4.7.10 (`bodyLimit({ maxSize: 100KB })`) | 413 Payload Too Large | 413 Payload Too Large | Yes — aborted at 1.2MB |
| SvelteKit 2.21.4 / adapter-node 5.2.12 (`BODY_SIZE_LIMIT=512K`) | 200 | 500 Internal Server Error | Yes — aborted at 745KB |
| Next.js 16.2.4 (`bodySizeLimit: '1 MB'`) | 200 | 500 Internal Server Error | No (full upload, rejected after) |
| H3 2.1.1 (Nuxt/Nitro stack) | 200 | 200 — accepted | No limit |
| React Router 7.14.0 | 200 | 200 — accepted | No limit |
| TanStack Start 1.167.42 | 200 | 200 — accepted | No limit (this PR fixes it) |

Source code references

| Framework | How POST body is handled | Source |
| --- | --- | --- |
| Express | `raw-body` — streaming + Content-Length pre-check | read.js#L39-L173 |
| Hono | `ReadableStream` wrapper — Content-Length pre-check + streaming byte count | body-limit/index.ts#L68-L125 |
| SvelteKit | Streaming `req.on('data')` — `chunk.length` count + `SvelteKitError(413)` | node/index.js#L38-L98 |
| Next.js | Streaming `Transform` — byte count per chunk, aborts via `pipeline()` | action-handler.ts#L898-L917 |
| H3 | `event.req.text()` — no size check | body.ts#L23-L47 |
| React Router | `request.body` passed through — no size check | single-fetch.ts#L37-L89 |
| TanStack Start | `request.json()` — no size check | server-functions-handler.ts#L151-L154 |

How to read the attached logs

Raw curl output for each framework — search for:

| What you're looking for | Search string |
| --- | --- |
| GET test result | `Test A` |
| POST 200KB result | `Test B` |
| POST 2MB result | `Test C` |
| Streaming abort evidence | `Test D` |

Possible approaches

| # | Approach | Pros | Cons |
| --- | --- | --- | --- |
| A | Apply `MAX_PAYLOAD_SIZE` (1MB) to POST body (current PR) | Simple, consistent with GET path | `await request.text()` buffers the full body before checking — a multi-GB payload still hits memory |
| B | Check `Content-Length` header before reading the body (GET + POST) | Rejects oversized requests instantly, zero memory cost | Chunked transfer-encoding has no `Content-Length` — but browsers always set it for `fetch()` |
| C | Streaming body reader with byte counting (like Next.js / SvelteKit) | True memory protection even against chunked requests | More code, more complexity |

My take

Option B (Content-Length pre-check) is probably the sweet spot. Browser fetch() calls to server functions always include Content-Length, so it covers the realistic attack surface with minimal code. A streaming approach (Option C) would be a nice hardening step down the road, but even Next.js still has // TODO: add body limit on their Edge runtime.
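For reference, Option B could be as small as this (hedged sketch; the function name and placement are illustrative, not code from the PR):

```typescript
// Illustrative sketch of Option B: reject oversized bodies up front from
// the Content-Length header, before reading anything. Chunked requests
// carry no Content-Length and slip past this check; that is the gap a
// streaming reader (Option C) would close.
const MAX_PAYLOAD_SIZE = 1_000_000

function exceedsPayloadLimit(
  headers: Headers,
  maxBytes: number = MAX_PAYLOAD_SIZE,
): boolean {
  const contentLength = headers.get('content-length')
  if (contentLength === null) return false // chunked: no header to check
  return Number(contentLength) > maxBytes
}
```

The handler would call this before touching the body and throw `'Payload too large'` when it returns true, keeping the error identical to the GET path's.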

Happy to update the PR to whichever approach you'd prefer.

Attachment

express.log
h3.log
hono.log
nextjs.log
react-router-v7.log
sveltekit.log
tanstack-start.log

