295 Commits

Author SHA1 Message Date
Daniel Lemire
af4e24a30f Bump version. v3.1.0 2021-09-14 21:32:14 -04:00
Daniel Lemire
b334317dd2 Minor fixes 2021-09-14 21:31:34 -04:00
Daniel Lemire
1b9150913e Updating. v3.0.0 2021-09-13 22:04:26 -04:00
Daniel Lemire
5c85d38eda
Merge pull request #104 from fastfloat/dlemire/bigint
Adopting Alexhuszagh's decimal comparison approach for long input strings
2021-09-13 22:03:37 -04:00
Daniel Lemire
3f0ba09a95
Merge pull request #96 from Alexhuszagh/bigint
Implement the big-integer arithmetic algorithm.
2021-09-13 21:23:14 -04:00
Alex Huszagh
fc0c8680a5 Implement the big-integer arithmetic algorithm.
Replaces the existing decimal implementation, giving substantial
performance improvements for near-halfway cases. This is especially
fast for inputs with a large number of digits.

**Big Integer Implementation**

A small subset of big-integer arithmetic has been added, with the
`bigint` struct. It uses a stack-allocated vector with enough bits to
store a float with the largest possible number of significant digits.
That requirement is log2(10^(769 + 342)) bits, accounting for the
largest-magnitude exponent and the maximum digit count (~3600 bits),
rounded up to a 4k-bit budget.

The limb size is determined by the architecture: most 64-bit
architectures have efficient 128-bit multiplication, either through a
single hardware instruction or two native multiplications for the high
and low halves. This includes x86_64, mips64, s390x, aarch64,
powerpc64, and riscv64; the only known exceptions are sparcv8 and
sparcv9. Therefore, we use 64-bit limbs on 64-bit architectures other
than SPARC, and otherwise fall back to 32-bit limbs.
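
A minimal sketch of this compile-time selection; the macro checks,
names, and the exact bit-budget constant are illustrative assumptions,
not the library's actual configuration code:

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative only: pick 64-bit limbs where 128-bit products are cheap,
// otherwise fall back to 32-bit limbs (32-bit targets, SPARC).
#if defined(__SIZEOF_INT128__) && !defined(__sparc__)
typedef uint64_t limb_t;
constexpr size_t limb_bits = 64;
#else
typedef uint32_t limb_t;
constexpr size_t limb_bits = 32;
#endif

// Enough bits for the largest exact decimal we must represent,
// log2(10^(769 + 342)), rounded up to a "4k-bit" budget (exact value
// here is an assumption for illustration).
constexpr size_t bigint_bits = 4000;
constexpr size_t bigint_limbs = bigint_bits / limb_bits;
```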

A simple stackvector is used, which only needs operations to append
elements, index them, and truncate the vector.
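
A rough sketch of such a fixed-capacity vector (the name and exact
interface are assumptions for illustration, not the library's
declarations):

```cpp
#include <cstddef>

// Fixed-capacity vector backed by a stack array: append, index, truncate.
// The capacity is bounded at compile time, so no heap allocation occurs.
template <typename T, size_t N>
struct stackvector {
  T items[N];
  size_t length = 0;

  // Returns false instead of writing out of bounds; the caller decides
  // whether to abort (see the notes on FASTFLOAT_ASSERT below).
  bool try_push_back(T value) {
    if (length == N) { return false; }
    items[length++] = value;
    return true;
  }
  T &operator[](size_t i) { return items[i]; }
  const T &operator[](size_t i) const { return items[i]; }
  size_t size() const { return length; }
  void truncate(size_t n) { if (n < length) { length = n; } }
};
```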

`bigint` is then just a wrapper around this, with methods for
big-integer arithmetic. For our algorithms, we only need multiplication
by a power (x * b^N), multiplication by a bigint or scalar value, and
addition of a bigint or scalar value. Scalar addition and multiplication
use compiler extensions when possible (__builtin_add_overflow and
__uint128_t); otherwise, we implement simple logic shown to optimize
well on MSVC. Big-integer multiplication is done via grade-school
multiplication, which at these sizes is more efficient than
asymptotically faster algorithms. Multiplication by a power is done via
bit shifts for powers of two, and for powers of five via iterative
multiplications by a large power of five and then a scalar value.
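
As one concrete example of the scalar building blocks, here is a hedged
sketch of overflow-aware limb addition: the compiler extension when it
is available, and a plain comparison otherwise (the function name is
hypothetical):

```cpp
#include <cstdint>

// Add two 64-bit limbs; return true if the result wrapped around.
inline bool scalar_add(uint64_t x, uint64_t y, uint64_t &sum) {
#if defined(__GNUC__) || defined(__clang__)
  return __builtin_add_overflow(x, y, &sum);
#else
  sum = x + y;     // unsigned arithmetic wraps, which is well defined
  return sum < x;  // overflow occurred iff the sum wrapped below x
#endif
}
```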

**compute_float**

`compute_float` has been slightly modified so that, if the algorithm
cannot round correctly, it returns a normalized, extended-precision
adjusted mantissa with power2 shifted by INT16_MIN, so the exponent is
always negative. `compute_error` and `compute_error_scaled` have been
added.
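
A hedged sketch of how this error signal might look to a caller; the
field and constant names follow the description above but are
assumptions about the actual code:

```cpp
#include <cstdint>

// Extended-precision result of the fast path.
struct adjusted_mantissa {
  uint64_t mantissa;
  int32_t power2;  // biased by INT16_MIN when rounding could not be decided
};

constexpr int32_t invalid_am_bias = -0x8000;  // INT16_MIN

// A negative exponent means "fall back to the digit-comparison path".
inline bool needs_digit_comparison(adjusted_mantissa am) {
  return am.power2 < 0;
}

// Undo the bias to recover the normalized extended-precision value.
inline adjusted_mantissa remove_invalid_bias(adjusted_mantissa am) {
  am.power2 -= invalid_am_bias;
  return am;
}
```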

**Digit Optimizations**

To improve performance for numbers with many digits,
`parse_eight_digits_unrolled` is used for both the integer and fraction
parts, inside a while loop rather than two nested if statements. This
adds no noticeable performance cost for common floats, but dramatically
improves performance for numbers with many digits (without these
optimizations, ~65% of the total runtime cost is in
parse_number_string).
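
A hedged sketch of the loop shape: the helpers below are simplified,
scalar stand-ins for the SWAR routines (`parse_eight_digits_unrolled`
and its companion check); only the control flow is the point.

```cpp
#include <cstdint>

// Simplified stand-ins for the SWAR helpers (illustrative, not the real ones).
static bool is_eight_digits(const char *p) {
  for (int i = 0; i < 8; ++i) {
    if (p[i] < '0' || p[i] > '9') { return false; }
  }
  return true;
}
static uint32_t parse_eight_digits(const char *p) {
  uint32_t v = 0;
  for (int i = 0; i < 8; ++i) { v = 10 * v + uint32_t(p[i] - '0'); }
  return v;
}

// One while loop that eats eight digits per iteration, then a per-digit
// tail, instead of nested if statements for partial blocks.
static const char *accumulate_digits(const char *p, const char *end,
                                     uint64_t &i) {
  while (end - p >= 8 && is_eight_digits(p)) {
    i = i * 100000000 + parse_eight_digits(p);  // wraparound handled by caller
    p += 8;
  }
  while (p != end && *p >= '0' && *p <= '9') {
    i = i * 10 + uint64_t(*p - '0');
    ++p;
  }
  return p;
}
```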

**Parsed Number**

Two fields have been added to `parsed_number_string`, containing slices
of the integer and fraction digits. This is extremely cheap, since the
work is already done: the strings are pre-tokenized during parsing. On
overflow, this lets us re-parse the tokenized strings without checking
whether each character is a digit. Likewise, the big-integer algorithms
can simply re-parse the pre-tokenized strings.
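
A sketch of what the two added fields could look like, assuming a
minimal pointer-plus-length view; all names here are illustrative, not
the actual declarations:

```cpp
#include <cstddef>
#include <cstdint>

// Minimal pointer + length view, in the spirit of the `slice` type
// described under "Rust-Isms" below.
struct digit_span {
  const char *ptr;
  size_t len;
};

// Parsed-number record with the two added fields: the pre-tokenized
// integer and fraction digit runs, so the slow path can re-read digits
// without re-validating each character.
struct parsed_number {
  uint64_t mantissa;
  int64_t exponent;
  bool negative;
  bool too_many_digits;
  digit_span integer;   // digits before the decimal point
  digit_span fraction;  // digits after the decimal point
};
```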

**Slow Algorithm**

The new algorithm is `digit_comp`, which takes the parsed number string
and the `adjusted_mantissa` from `compute_float`. The significant digits
are parsed into a big integer, and the exponent relative to those
significant digits is calculated. If the exponent is >= 0, we use
`positive_digit_comp`; otherwise, we use `negative_digit_comp`.
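
In outline, the dispatch reads roughly as below; every identifier is a
stand-in for the steps in the text rather than the real code:

```cpp
#include <cstdint>

// Hypothetical types and helpers, mirroring the description above.
struct parsed_number;   // carries the pre-tokenized digit spans
struct bigint {};       // stack-allocated big integer, as sketched earlier

int32_t parse_digits_into(bigint &digits, const parsed_number &num);
void positive_digit_comp(bigint &digits, int32_t exp);
void negative_digit_comp(bigint &digits, const parsed_number &num, int32_t exp);

// Load the significant digits, compute the exponent relative to them,
// and pick the comparison routine by its sign.
void digit_comp(const parsed_number &num) {
  bigint digits;
  int32_t exp = parse_digits_into(digits, num);
  if (exp >= 0) {
    positive_digit_comp(digits, exp);
  } else {
    negative_digit_comp(digits, num, exp);
  }
}
```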

`positive_digit_comp` is quite simple: we scale the significant digits
by the exponent, take the high 64 bits for the native float, determine
whether any lower bits were truncated, and use that to direct rounding.
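
A hedged sketch of that scaling step, with assumed big-integer helpers:
multiplying by 10^exp is a 5^exp multiply followed by a left shift,
after which the top 64 bits plus a truncation flag are enough to round.

```cpp
#include <cstdint>

struct bigint;  // hypothetical big-integer type
void     big_mul_pow5(bigint &x, uint32_t e);        // x *= 5^e
void     big_shl(bigint &x, uint32_t n);             // x <<= n
uint64_t big_hi64(const bigint &x, bool &truncated); // top 64 bits; lower bits lost?

// Scale the digits by 10^exp, then extract what rounding needs.
uint64_t scale_positive(bigint &digits, int32_t exp, bool &truncated) {
  big_mul_pow5(digits, uint32_t(exp));  // 10^exp == 5^exp * 2^exp
  big_shl(digits, uint32_t(exp));
  return big_hi64(digits, truncated);   // `truncated` breaks exact-halfway ties
}
```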

`negative_digit_comp` is a little more complex, but still
straightforward: we use the parsed significant digits as the real
digits, and calculate the theoretical digits from `b+h`, the halfway
point between `b` and `b+u`, the next float up. To get `b`, we round the
adjusted mantissa down, create an extended-precision representation, and
calculate the halfway point. We now have a base-10 exponent for the real
digits and a base-2 exponent for the theoretical digits. We scale the
two to the same exponent by multiplying the theoretical digits by
`5**-real_exp`. We then get the base-2 exponent difference as
`theor_exp - real_exp`; if this is positive, we scale the theoretical
digits up by that power of two, otherwise we scale the real digits up by
its absolute value. Now both are at the same magnitude, and we simply
compare the digits in the big integers and use the result to direct
rounding.
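
The exponent alignment can be sketched as follows, again with assumed
helper names; `real_exp` is negative on this path, so `-real_exp` is the
power of five applied to the theoretical digits.

```cpp
#include <cstdint>

struct bigint;  // hypothetical big-integer type
void big_mul_pow5(bigint &x, uint32_t e);            // x *= 5^e
void big_shl(bigint &x, uint32_t n);                 // x <<= n
int  big_compare(const bigint &a, const bigint &b);  // sign of a - b

// Bring the real digits (base-10 exponent real_exp) and the theoretical
// halfway digits (base-2 exponent theor_exp) to the same scale, compare.
int compare_to_halfway(bigint &real_digits, int32_t real_exp,
                       bigint &theor_digits, int32_t theor_exp) {
  big_mul_pow5(theor_digits, uint32_t(-real_exp));
  int32_t pow2_exp = theor_exp - real_exp;
  if (pow2_exp >= 0) {
    big_shl(theor_digits, uint32_t(pow2_exp));
  } else {
    big_shl(real_digits, uint32_t(-pow2_exp));
  }
  // > 0: real digits above the halfway point (round up); < 0: below; 0: tie.
  return big_compare(real_digits, theor_digits);
}
```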

**Rust-Isms**

A few Rust-isms have been added, since they simplify the logic and
assertions. These can be trivially removed or reworked, as needed.

- a `slice` type has been added, which is a pointer and length.
- `FASTFLOAT_ASSERT`, `FASTFLOAT_DEBUG_ASSERT`, and `FASTFLOAT_TRY` have
  been added
  - `FASTFLOAT_ASSERT` aborts, even in release builds, if the condition
    fails.
  - `FASTFLOAT_DEBUG_ASSERT` defaults to `assert`, for logic errors.
  - `FASTFLOAT_TRY` is like a Rust `Option` type, which propagates
    errors.

Specifically, `FASTFLOAT_TRY` is useful in combination with
`FASTFLOAT_ASSERT` to ensure that no memory corruption is possible in
the big-integer arithmetic. Although the `bigint` type ensures we have
enough storage for all valid floats, memory issues are a severe class of
vulnerabilities, and given the low performance cost of the checks, we
abort if we would otherwise perform an out-of-bounds write. This can
only occur when adding items to the vector, which happens in a very
small number of places. Therefore, we abort if our memory-safety
guarantees ever fail. lexical has never aborted, so it is unlikely we
will ever fail these guarantees.
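
A hedged sketch of what these three macros could amount to (the actual
definitions may differ):

```cpp
#include <cassert>
#include <cstdlib>

// Abort even in release builds if a safety-critical condition fails.
#define FASTFLOAT_ASSERT(x)       { if (!(x)) std::abort(); }

// Logic-error checks that compile away outside debug builds.
#define FASTFLOAT_DEBUG_ASSERT(x) assert(x)

// Propagate failure out of the enclosing function, Option/`?`-style.
#define FASTFLOAT_TRY(x)          { if (!(x)) return false; }
```

For example, appending a limb could be written as
`FASTFLOAT_TRY(vec.try_push_back(limb))` inside a helper returning
`bool`, with the outermost call wrapped in `FASTFLOAT_ASSERT(...)`, so a
would-be out-of-bounds write becomes an abort instead of corruption.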
2021-09-10 18:53:53 -05:00
Daniel Lemire
25b240a02d
Merge pull request #102 from jrahlf/reduce_includes
Remove unneeded includes
2021-09-05 19:04:31 -04:00
Jonas Rahlf
162a37b25a remove cstdio includes, remove cassert include, add aesthetic newlines 2021-09-05 23:13:41 +02:00
Daniel Lemire
8c4405e76e
Merge pull request #101 from fastfloat/dlemire/const
C++ 20 support and tests
2021-09-03 18:54:52 -04:00
Daniel Lemire
f74d505615 Adding C++20 tests. 2021-09-03 17:57:44 -04:00
Daniel Lemire
1a56fe5f64
Merge pull request #100 from jrahlf/constexpr
constexpr for c++20 compliant compilers
2021-09-03 16:36:49 -04:00
Jonas Rahlf
4e13ec151b check for HAS_CXX20_CONSTEXPR before attempting to do c++20 stuff 2021-09-02 23:20:28 +02:00
Jonas Rahlf
e5d5e576a6 use #if defined __has_include properly 2021-09-02 22:22:03 +02:00
Jonas Rahlf
b17eafd06f change compiler check for bit_cast so it compiles with older compilers 2021-09-02 22:00:57 +02:00
Jonas Rahlf
d8ee88e7f6 initial version with working constexpr for c++20 compliant compilers 2021-09-01 00:52:25 +02:00
Daniel Lemire
898f54f30a
Merge pull request #95 from Alexhuszagh/ptr
Fixes #94, with unspecified behavior in pointer comparisons.
2021-08-21 14:27:35 -04:00
Alex Huszagh
3e74ed313a Fixes #94, with unspecified behavior in pointer comparisons. 2021-08-21 13:07:57 -05:00
Daniel Lemire
fe1ce58053
Merge pull request #92 from fastfloat/dlemire/v2.0.0_candidate
Candidate release.
v2.0.0
2021-08-03 09:27:28 -04:00
Daniel Lemire
f70b645436 Candidate release. 2021-08-03 09:22:40 -04:00
Daniel Lemire
bb140f0a87
Merge pull request #91 from pitrou/issue90-decimal-point
Issue #90: accept custom decimal point
2021-08-03 08:59:52 -04:00
Antoine Pitrou
3881ea6937 Issue #90: accept custom decimal point 2021-08-03 10:44:24 +02:00
Daniel Lemire
3bd0c01c6c
Update README.md 2021-07-16 09:38:05 -04:00
Daniel Lemire
33f3d90397
Update README.md 2021-07-02 14:20:53 -04:00
Daniel Lemire
50b9b7c211
Update README.md 2021-06-23 17:57:44 -04:00
Daniel Lemire
21efa92c91
Merge pull request #87 from xelatihy/amalgamate
Single include script
2021-06-23 17:47:03 -04:00
Fabio Pellacini
bd76291dd0 updated test 2021-06-23 23:33:58 +02:00
Fabio Pellacini
cdae8a0357 test for amalgamation 2021-06-23 23:32:10 +02:00
Fabio Pellacini
f900953621 updated readme 2021-06-23 23:23:39 +02:00
Fabio Pellacini
aea0ea7968 updated readme 2021-06-23 07:27:17 +02:00
Fabio Pellacini
f861b35c82 Added amalgamation script 2021-06-23 07:24:28 +02:00
Daniel Lemire
8159e8bcf6
Merge pull request #84 from musicinmybrain/system-doctest-option
Add a SYSTEM_DOCTEST CMake option
v1.1.2
2021-06-21 12:39:16 -04:00
Benjamin A. Beasley
fe8e477e14 Add a SYSTEM_DOCTEST CMake option
This option is off by default, maintaining the previous behavior. When
enabled (along with FASTFLOAT_TEST), it bypasses the FetchContent
machinery for doctest so that a system-wide installation of the doctest
header can be easily used. In this case, the header doctest/doctest.h
should be available on the compiler’s include path.

This option is especially useful for Linux distributions and others that
need to run the tests in fully offline build environments.
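
For instance, an offline packager might configure the tests against the
system header roughly like this (paths and generator flags omitted; the
option names come from this change, the rest is standard CMake usage):

```sh
cmake -D FASTFLOAT_TEST=ON -D SYSTEM_DOCTEST=ON ..
cmake --build . && ctest
```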

Fixes #83.
2021-06-21 11:24:46 -04:00
Daniel Lemire
bfda5881ab
Merge pull request #82 from fastfloat/dlemire/adding_m_arm
Adding m_arm detection.
2021-06-07 10:43:22 -04:00
Daniel Lemire
94c78adb2e Typo 2021-06-07 10:34:44 -04:00
Daniel Lemire
93a2c79cf2 Adding m_arm detection. 2021-06-07 10:27:52 -04:00
Daniel Lemire
a7fbcb0a45
Merge pull request #81 from fastfloat/dlemire/windows_arm
Adding a build test for Windows ARM.
v1.1.1
2021-06-07 10:06:03 -04:00
Daniel Lemire
2504268bbf Being more narrow. 2021-06-07 09:59:44 -04:00
Daniel Lemire
a721b344b4 Trying. 2021-06-07 09:43:36 -04:00
Daniel Lemire
1457b5f15a Workaround for doctest. 2021-06-07 09:38:15 -04:00
Daniel Lemire
6921c8f264 Upgrading doctest. 2021-06-07 09:32:39 -04:00
Daniel Lemire
f54b41c09e Tweak for 32-bit Windows 2021-06-07 09:14:09 -04:00
Daniel Lemire
496fd4cf49 Trying both ARM and ARM64 2021-06-07 09:08:01 -04:00
Daniel Lemire
87c16bb093 Adding a build test for Windows ARM. 2021-06-07 08:59:23 -04:00
Daniel Lemire
e3af106668
Merge pull request #79 from fastfloat/dlemire/vs_studio_persmissive_minus
adding to recent Visual Studio builds a permissive- flag
v1.1.0
2021-06-01 10:08:10 -04:00
Daniel Lemire
9519835573 Cleaner flag setting. 2021-06-01 10:03:05 -04:00
Daniel Lemire
0ece926e6d Fixing --verbose. 2021-06-01 09:49:41 -04:00
Daniel Lemire
06e61729c9 making constexpr as inline. 2021-06-01 09:46:43 -04:00
Daniel Lemire
799f24ba07 Making vs builds verbose. 2021-06-01 09:36:38 -04:00
Daniel Lemire
862082c468 Adding permissive- flag to VS builds. 2021-06-01 09:35:25 -04:00
Daniel Lemire
2f0c95fe5b
Update README.md 2021-05-31 18:28:59 -04:00