
[LANG-1772] Restrict size of cache to prevent overflow errors #1379


Open · wants to merge 4 commits into base: master

Conversation

jcwinters

Added a length restriction to RandomStringUtils, limiting the cache to 60M entries. Because of rejections, the bitIndex in the underlying cache can overflow when right-shifting. Also added a test to verify the fix.

This test takes quite a while to run, so if necessary I can create a profile for slow tests to exclude the test from the normal build.

@ppkarwasz

> Added a length restriction to RandomStringUtils, limiting the cache to 60M entries. Because of rejections, the bitIndex in the underlying cache can overflow when right-shifting. Also added a test to verify the fix.

The problem is caused by an integer overflow of bitIndex in:

result |= cache[bitIndex >> 3] >> (bitIndex & 0x7) & (1 << generatedBitsInIteration) - 1;
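A minimal sketch of this failure mode, assuming (as in the discussion above) that bitIndex is an int counting bits consumed from the cache:

```java
// Sketch of the overflow: once an int bit counter wraps past
// Integer.MAX_VALUE it becomes negative, and (bitIndex >> 3) then
// yields a negative byte index, because >> is an arithmetic
// (sign-preserving) shift in Java.
public class BitIndexOverflow {
    public static void main(String[] args) {
        int bitIndex = Integer.MAX_VALUE; // last non-negative bit position
        bitIndex += 1;                    // silently wraps to Integer.MIN_VALUE
        int byteIndex = bitIndex >> 3;    // still negative: -268_435_456
        System.out.println("bitIndex=" + bitIndex + " byteIndex=" + byteIndex);
    }
}
```

A negative byteIndex then indexes `cache[...]` out of bounds, which is the symptom the size restriction prevents.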

A simpler solution would be to:
  • Change the type of bitIndex to long, so it can count up to 8 * Integer.MAX_VALUE
  • Refactor the expression that computes the cacheSize argument in a way that it does not overflow

@garydgregory garydgregory changed the title LANG-1772 restrict size of cache to prevent overflow errors [LANG-1772] Restrict size of cache to prevent overflow errors May 8, 2025
@garydgregory
Member

The test as is blows up GitHub builds so let's use something like @EnabledIfSystemProperty(named = "test.large.heap", matches = "true")
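For reference, a plain-Java sketch of the guard that JUnit 5's @EnabledIfSystemProperty provides declaratively (property name taken from the comment above; this is an illustration, not the committed test):

```java
// Sketch of the same guard in plain Java: skip the heavyweight body unless
// the JVM was started with -Dtest.large.heap=true. On a test method, JUnit 5's
// @EnabledIfSystemProperty(named = "test.large.heap", matches = "true")
// expresses this declaratively, so CI builds without the flag skip the test.
public class LargeHeapGuard {
    public static void main(String[] args) {
        if (!"true".equals(System.getProperty("test.large.heap"))) {
            System.out.println("skipped: -Dtest.large.heap=true not set");
            return;
        }
        // ...allocate the huge cache and run the overflow regression here...
    }
}
```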

@jcwinters
Author

> A simpler solution would be to:
>
>   • Change the type of bitIndex to long, so it can count up to 8 * Integer.MAX_VALUE
>   • Refactor the expression that computes the cacheSize argument in a way that it does not overflow:

I considered that; changing bitIndex to long adds some casting complexity I didn't want to deal with, as arrays are int-indexed. And I didn't particularly want to change the randomization algorithm: to eliminate overflow possibilities I'd have to switch to something without rejections, and it made my head hurt.
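For illustration, a sketch of the casting overhead the long-based alternative entails (names here are hypothetical, not the committed code):

```java
// Sketch of the alternative that was decided against: widening the bit
// counter to long removes the wrap, but every cache access then needs a
// narrowing cast back to int, since Java arrays are indexed by int.
public class LongBitIndexSketch {
    public static void main(String[] args) {
        long bitIndex = 8L * Integer.MAX_VALUE; // far beyond int range, no wrap
        long byteIndex = bitIndex >> 3;         // == Integer.MAX_VALUE
        byte[] cache = new byte[8];
        // The cast (plus a bounds strategy) is needed at each array access:
        int slot = (int) (byteIndex % cache.length);
        System.out.println("byteIndex=" + byteIndex + " slot=" + slot);
    }
}
```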

@garydgregory
Member

Hi @ppkarwasz

You've proposed an alternative solution. Would you show this in a PR?

@ppkarwasz

> You've proposed an alternative solution. Would you show this in a PR?

I'll submit a PR by the end of the week.

…nside the CachedRandomBits constructor - also checking if the padding produces overflow. No longer using an arbitrary value but being more precise.
Member

@garydgregory garydgregory left a comment


@jcwinters
Thank you for your update.
I think the test should be more of a white-box test, exercising values just below and above the overflow. WDYT, @ppkarwasz?


@ppkarwasz ppkarwasz left a comment


This looks OK to me.

I don't think we need to preemptively generate more than 256 MiB of random data. The goal of generating data in bulk is to take advantage of the fact that random number generators are typically more efficient when producing large chunks of data rather than individual bytes. However, I suspect the optimal amount is significantly less than 256 MiB; we should run some benchmarks to determine the best value.
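A rough timing sketch of the kind of benchmark suggested (not rigorous JMH; the 1 MiB total and 1 KiB chunk size are illustrative assumptions):

```java
import java.util.Random;

// Compare many small nextBytes calls against one bulk call for the same
// total amount of data. A proper comparison should use JMH with warmup;
// this only illustrates the shape of the experiment.
public class BulkVsChunked {
    public static void main(String[] args) {
        final Random random = new Random(42);
        final byte[] chunk = new byte[1024];
        long t0 = System.nanoTime();
        for (int i = 0; i < 1024; i++) {
            random.nextBytes(chunk);        // 1 MiB total, in 1 KiB chunks
        }
        final long chunkedNs = System.nanoTime() - t0;
        final byte[] bulk = new byte[1024 * 1024];
        t0 = System.nanoTime();
        random.nextBytes(bulk);             // 1 MiB in one call
        final long bulkNs = System.nanoTime() - t0;
        System.out.println("chunked=" + chunkedNs + "ns bulk=" + bulkNs + "ns");
    }
}
```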

@ppkarwasz ppkarwasz requested a review from garydgregory May 13, 2025 14:01
@garydgregory
Member

@jcwinters
Please run 'mvn' by itself before you push to catch all build errors.

@jcwinters
Author

> @jcwinters Please run 'mvn' by itself before you push to catch all build errors.
@garydgregory Sorry about that, I'm so used to my workflow with pre-commit hooks running everything that I didn't even look. Thanks for the patience with the newbie, and I promise to do better 😄

* The maximum size of the cache.
*
* <p>
* This is dictated by the {@code if (bitIndex >> 3 >= cache.length)} in the {@link #nextBits(int)} method.
Member

@garydgregory garydgregory May 14, 2025


If the 3 in this expression MUST match the 3 in the expression building cacheSize in the random(...) method, then it should be refactored from a magic number into a constant IMO.

This makes me wonder about the other magic numbers 5 and 10 which beg for documentation if only to help with maintenance.

WDYT?
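A sketch of the suggested refactoring (the constant names here are hypothetical, not the committed code):

```java
// Naming the shift ties the bounds check in nextBits(int) to the
// MAX_CACHE_SIZE derivation, so the two 3s cannot drift apart.
public class CacheShiftConstants {
    /** Shift converting a bit index to a byte index (2^3 bits per byte). */
    private static final int BIT_INDEX_TO_BYTE_INDEX_SHIFT = 3;
    /** Largest cache whose bit count still fits in an int. */
    static final int MAX_CACHE_SIZE = Integer.MAX_VALUE >> BIT_INDEX_TO_BYTE_INDEX_SHIFT;

    public static void main(String[] args) {
        System.out.println("MAX_CACHE_SIZE=" + MAX_CACHE_SIZE); // 268435455
    }
}
```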

* <p>
* This is dictated by the {@code if (bitIndex >> 3 >= cache.length)} in the {@link #nextBits(int)} method.
*/
private static final int MAX_CACHE_SIZE = 0x7FFF_FFFF >> 3;


Nice, this looks better than the 0x1FFF_FFFF I proposed! 💯

Nit: following in the same direction you proposed, maybe Integer.MAX_VALUE >> 3 would be even easier to understand?

final CachedRandomBits arb = new CachedRandomBits((count * gapBits + 3) / 5 + 10, random);
// For huge strings the padding required can cause an overflow
// 63_913_201 is the highest x such that (21x + 3) / 5 + 10 <= 0x0FFF_FFFF.
final int cacheSize = (count * gapBits + 3) > 0 ? (count * gapBits + 3) / 5 + 10 : 63_913_201;


63_913_201 is the maximum allowed value for count, not cacheSize.

  • If count is 63_913_201, then cacheSize is close to MAX_CACHE_SIZE.
  • By increasing count further, the value of the expression (count * gapBits + 3) / 5 + 10 increases until it almost reaches Integer.MAX_VALUE / 5 + 10.

If we want cacheSize to be a non-decreasing function of count, we should use Integer.MAX_VALUE / 5 + 10 instead of 63_913_201.
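The bound quoted above can be checked directly, assuming gapBits == 21 as in the snippet (everything else follows from MAX_CACHE_SIZE = Integer.MAX_VALUE >> 3):

```java
// Verifies that 63_913_201 is the largest count for which the un-clamped
// cacheSize expression still fits within MAX_CACHE_SIZE (0x0FFF_FFFF),
// under the assumption gapBits == 21.
public class CacheSizeBoundCheck {
    public static void main(String[] args) {
        final long maxCacheSize = Integer.MAX_VALUE >> 3; // 268_435_455
        final long gapBits = 21;
        final long atBound   = (63_913_201L * gapBits + 3) / 5 + 10;
        final long pastBound = (63_913_202L * gapBits + 3) / 5 + 10;
        System.out.println(atBound + " fits: " + (atBound <= maxCacheSize));
        System.out.println(pastBound + " fits: " + (pastBound <= maxCacheSize));
    }
}
```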

4 participants