When ch_to_digit() is called with a UTF-16 or UTF-32 code unit, it simply
truncates away any data stored in the byte(s) above the low byte of the
code unit. It then uses a lookup table to determine whether the low byte
corresponds to an ASCII digit. This is incorrect: as soon as any bit
outside the low byte is set, the code unit can no longer correspond to an
ASCII digit.
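As an illustration only (the identifiers below are hypothetical, not the real
fast_float ones), the broken path boils down to a single-byte cast before the
table lookup, so a UTF-16 code unit such as 0xD835 (a surrogate) indexes the
table at 0x35 and comes back as the digit 5:

    #include <array>
    #include <cstdint>

    // Sketch of the buggy behaviour, not the real fast_float code.
    // The table maps '0'..'9' to 0..9 and every other byte to the
    // sentinel 0xFF, meaning "not a digit".
    static const std::array<uint8_t, 256> digit_table = [] {
      std::array<uint8_t, 256> t{};
      t.fill(0xFF);
      for (int d = 0; d <= 9; ++d) {
        t['0' + d] = static_cast<uint8_t>(d);
      }
      return t;
    }();

    inline uint8_t buggy_ch_to_digit(char16_t c) {
      // The cast keeps only the low byte: 0xD835 becomes 0x35, i.e. '5'.
      return digit_table[static_cast<uint8_t>(c)];
    }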
To fix this, we produce a mask that is all zeroes if any bit outside the
low byte of the code unit is set, and all ones otherwise. ANDing this mask
with the original code unit forces the table lookup to return the sentinel
value from index zero whenever any high bit was set, so the code unit is
not parsed as an integer.
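A minimal sketch of that fix, assuming the digit_table from the previous
sketch and using illustrative names rather than the exact fast_float ones:

    #include <cstdint>
    #include <type_traits>

    // Build a mask that is all ones when no bit outside the low byte is
    // set and all zeroes otherwise. ANDing it into the code unit sends any
    // wide code unit to index 0, whose table entry is the sentinel.
    template <typename UC>
    inline uint8_t fixed_ch_to_digit(UC c) {
      using U = typename std::make_unsigned<UC>::type;
      U u = static_cast<U>(c);
      U mask = static_cast<U>(0) - static_cast<U>((u >> 8) == 0);
      return digit_table[static_cast<uint8_t>(u & mask)];
    }

With this, fixed_ch_to_digit(u'5') still returns 5, while
fixed_ch_to_digit(char16_t(0xD835)) returns the 0xFF sentinel instead of 5.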
This bug was discovered when loading Mastodon posts in the Ladybird
browser, where some of Mastodon's JavaScript would trigger the code path
that erroneously parsed emojis as integers. The visible effect was that
some digits inside the posts would get rendered as one of the emojis
that parsed to that digit. For more details, see this issue:
https://github.com/LadybirdBrowser/ladybird/issues/6205
The emojis in the test case are simply all the emojis used on Mastodon
that caused the bug. They can be found here:
06803422da/app/javascript/mastodon/features/emoji/emoji_map.json
* fix some type errors.
* fix a small amount of unoptimized code.
* add more comments to the code.
* unify binary_format::max_mantissa_fast_path(), since the separate
  versions do the same thing (see the sketch after this list).
  The exponent value always fits in an int16_t.
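A rough sketch of what that unification can look like; this is an assumption
written against std::numeric_limits rather than the real binary_format<T>
members, so names and details may differ from the actual fast_float code:

    #include <cstdint>
    #include <limits>

    // Illustrative only: both the float and the double versions reduce to
    // the same expression, the largest integer mantissa the type can still
    // represent exactly, so one generic definition is enough.
    template <typename T>
    constexpr uint64_t max_mantissa_fast_path_sketch() {
      return uint64_t(2) << (std::numeric_limits<T>::digits - 1);
    }

    static_assert(max_mantissa_fast_path_sketch<double>() ==
                      (uint64_t(1) << 53), "2^53 for double");
    static_assert(max_mantissa_fast_path_sketch<float>() ==
                      (uint64_t(1) << 24), "2^24 for float");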
Benchmark results:

original main:
  time: 44278 ms
  size of tests: 389.0k
  size of program: 164.0k

my main:
  time: 42015 ms
  size of tests: 389.0k
  size of program: 164.0k

my main with FASTFLOAT_ONLY_POSITIVE_C_NUMBER_WO_INF_NAN:
  time: 41282 ms
  size of tests: 386.5k
  size of program: 161.5k
After this, I'll try it on my partner's Linux machine with the original tests for a better comparison.