
Harden S2K Parser #2283

Open
subbudvk wants to merge 4 commits into bcgit:main from subbudvk:subbudvk-patch-2

Conversation

@subbudvk

@subbudvk subbudvk commented Apr 20, 2026

The S2K wire parser reads memorySizeExponent as a raw byte (0–255) with no bounds check, so a crafted 24-byte PGP SKESK v4 packet with memExp=22 triggers an OutOfMemoryError.
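For context, here is a sketch of why a single wire byte can demand gigabytes. It assumes the RFC 9580 encoding, where the Argon2 memory cost is 2^memorySizeExponent KiB; the helper name is illustrative, not BC's internals:

```java
public class Argon2MemoryDemo {
    // Hypothetical helper: bytes implied by a wire-encoded exponent,
    // assuming memory cost = 2^exp KiB as in RFC 9580.
    static long impliedBytes(int memorySizeExponent) {
        return 1L << (memorySizeExponent + 10); // KiB -> bytes
    }

    public static void main(String[] args) {
        // memExp=22 from the crafted packet asks for 2^22 KiB = 4 GiB:
        System.out.println(impliedBytes(22)); // 4294967296 bytes
        // Yet the wire format can encode any exponent up to 255.
    }
}
```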

This PR adds a configurable cap on memorySizeExponent at wire-parse time:

-Dorg.bouncycastle.openpgp.argon2.max_memory_exp=N

A JUnit test is added with the PR: one method reproduces the OOM directly, and a second reads the same crafted packet through the fixed parser and asserts that an IOException is thrown before any large heap allocation.
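As a sketch of the parse-time cap (class and method names here are illustrative, not BC's actual internals; only the system property name comes from this PR):

```java
import java.io.IOException;

public class S2KBoundsCheck {
    // Default cap used by this sketch; the PR makes it overridable
    // via the system property below.
    private static final int DEFAULT_MAX_MEMORY_EXP = 21;

    static int maxMemoryExp() {
        String prop = System.getProperty(
            "org.bouncycastle.openpgp.argon2.max_memory_exp");
        return (prop != null) ? Integer.parseInt(prop) : DEFAULT_MAX_MEMORY_EXP;
    }

    // Intended to be called from the wire parser before any
    // Argon2 buffer is sized, so nothing large is ever allocated.
    static int checkMemorySizeExponent(int memExp) throws IOException {
        int max = maxMemoryExp();
        if (memExp > max) {
            throw new IOException("Argon2 memorySizeExponent " + memExp
                + " exceeds configured maximum " + max);
        }
        return memExp;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(checkMemorySizeExponent(16)); // within the cap
        try {
            checkMemorySizeExponent(22); // the crafted packet's value
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```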

@dghgit dghgit self-assigned this Apr 22, 2026
@dghgit
Contributor

dghgit commented Apr 23, 2026

Thanks for the PR. This is actually a bit weirder than it looks: BC already has a hard-coded max of 30, as it ultimately uses an int to represent the memory in bytes, and I can actually reproduce this, even in my IDE.

That said, I think the idea of being able to throttle it is a good one, but the setting needs to apply to generation as well as parsing, and we should probably keep 30 as the default maximum. I'm aware some people push values like this to the max for personal reasons, so a default of 21 would likely see 1.85 unexpectedly break things for them.

I've just pushed some updates related to the issue which adds bounds checking in both core and pg which I think cover the intention of this PR as well as generation. Let me know what you think.

@subbudvk
Author


@dghgit Thanks for the quick turnaround and for extending the fix to the generation path. I confirmed the OOM in practice and chose 21 as the limit, but I'm happy to align with 30 given the int ceiling and the compatibility concerns you raised. Changes look good to me.
