Replaces the existing decimal implementation, providing substantial
performance improvements for near-halfway cases. The gains are
especially large for numbers with many digits.
**Big Integer Implementation**
A small subset of big-integer arithmetic has been added via the
`bigint` struct. It uses a stack-allocated vector with enough bits to
store the float with the largest number of significant digits. This is
log2(10^(769 + 342)) bits, which accounts for the largest possible
magnitude exponent and digit count (3600 bits), rounded up to 4096 bits.
The limb size is determined by the architecture: most 64-bit
architectures have efficient 128-bit multiplication, either via a single
hardware instruction or two native multiplications for the high and low
bits. This includes x86_64, mips64, s390x, aarch64, powerpc64, and
riscv64; the only known exceptions are sparcv8 and sparcv9. Therefore,
we define a limb size of 64 bits on 64-bit architectures other than
SPARC, and otherwise fall back to 32-bit limbs.
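For illustration, the limb selection might look roughly like the sketch below. The macro and type names here are illustrative, not the exact identifiers in the patch.

// Sketch: pick a 64-bit limb where the compiler offers a 128-bit product,
// otherwise fall back to 32-bit limbs. Names are illustrative only.
#include <cstdint>

#if defined(__SIZEOF_INT128__) && !defined(__sparc__)
  typedef uint64_t limb_t;             // 64-bit limbs; products held in __uint128_t
  #define LIMB_BITS 64
#else
  typedef uint32_t limb_t;             // 32-bit limbs; products held in uint64_t
  #define LIMB_BITS 32
#endif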
A simple stackvector is used, which just has operations to add elements,
index, and truncate the vector.
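A rough sketch of the interface such a fixed-capacity stack vector might expose (illustrative names, not the exact ones in the patch):

// Sketch of a fixed-capacity, stack-allocated vector: append, index, truncate.
// Capacity is a compile-time constant; no heap allocation ever occurs.
#include <cstddef>

template <typename T, size_t N>
struct stack_vector {
  T data[N];
  size_t length = 0;

  bool try_push(T value) {             // fails instead of writing out of bounds
    if (length == N) return false;
    data[length++] = value;
    return true;
  }
  T& operator[](size_t i) { return data[i]; }
  void truncate(size_t new_len) { if (new_len < length) length = new_len; }
  size_t size() const { return length; }
};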
`bigint` is then just a wrapper around this, with methods for
big-integer arithmetic. For our algorithms, we just need multiplication
by a power (x * b^N), multiplication by a bigint or scalar value, and
addition by a bigint or scalar value. Scalar addition and multiplication
use compiler extensions when possible (__builtin_add_overflow and
__uint128_t); otherwise, we fall back to simple logic shown to optimize
well on MSVC. Big-integer multiplication is done via grade-school
multiplication, which at these sizes is more efficient than
asymptotically faster algorithms. Multiplication by a power is done via
bitshifts for powers of two, and for powers of five by iterative
multiplications by a large power of 5 followed by a single small scalar
multiplication for the remainder.
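A minimal sketch of the scalar-addition carry logic and of multiplying by 10^N as 5^N * 2^N. The helper names (`mul_small`, `shift_left`) are assumed for illustration, not the patch's actual identifiers.

#include <cstdint>

#if defined(__has_builtin)
  #if __has_builtin(__builtin_add_overflow)
    #define SKETCH_HAS_ADD_OVERFLOW 1
  #endif
#endif

// Add `y` into `x`, reporting whether the sum wrapped around.
inline uint64_t scalar_add(uint64_t x, uint64_t y, bool& overflow) {
#ifdef SKETCH_HAS_ADD_OVERFLOW
  uint64_t sum;
  overflow = __builtin_add_overflow(x, y, &sum);
  return sum;
#else
  uint64_t sum = x + y;                 // portable form that optimizes well on MSVC
  overflow = sum < x;
  return sum;
#endif
}

// Multiplying by 10^N = 5^N * 2^N: repeated multiplications by a large power
// of 5 plus one small remainder, then a single left shift for the power of 2.
// `BigInt` is assumed to provide mul_small and shift_left.
template <typename BigInt>
void mul_pow10(BigInt& big, uint32_t exp) {
  uint32_t shift = exp;                               // remember N for the final 2^N shift
  while (exp >= 27) {                                 // 5^27 is the largest power of 5 in 64 bits
    big.mul_small(7450580596923828125ULL);
    exp -= 27;
  }
  uint64_t rest = 1;
  for (uint32_t i = 0; i < exp; ++i) rest *= 5;
  if (rest != 1) big.mul_small(rest);
  big.shift_left(shift);
}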
**compute_float**
`compute_float` has been slightly modified: if the algorithm cannot
round correctly, it returns a normalized, extended-precision adjusted
mantissa with the power2 shifted by INT16_MIN, so the exponent is always
negative. `compute_error` and `compute_error_scaled` have been added.
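A sketch of the "cannot round correctly" convention, with illustrative names: power2 is biased by INT16_MIN so that it is guaranteed negative, and the caller uses that as the signal to fall back to the slow digit-comparison path.

#include <cstdint>

struct adjusted_mantissa {
  uint64_t mantissa;
  int32_t power2;
};

constexpr int32_t invalid_bias = INT16_MIN;      // added to power2 when rounding is ambiguous

inline bool needs_digit_comp(const adjusted_mantissa& am) {
  return am.power2 < 0;                          // a biased exponent is always negative
}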
**Digit Optimizations**
To improve performance for numbers with many digits,
`parse_eight_digits_unrolled` is used for both the integer and fraction
digits, with a while loop rather than two nested if statements. This
adds no noticeable performance cost for common floats, but dramatically
improves performance for numbers with many digits (without these
optimizations, ~65% of the total runtime cost is in
`parse_number_string`).
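The core of that routine is a well-known SWAR trick. A self-contained sketch, assuming a little-endian load and that the caller has already verified all eight bytes are digits:

#include <cstdint>
#include <cstring>

// Read 8 ASCII digits as one 64-bit word and reduce them with three
// multiply-and-shift steps (little-endian assumed).
inline uint32_t parse_eight_digits(const char* chars) {
  uint64_t val;
  std::memcpy(&val, chars, sizeof(uint64_t));
  val = (val & 0x0F0F0F0F0F0F0F0FULL) * 2561 >> 8;          // ASCII -> digit values, pair digits
  val = (val & 0x00FF00FF00FF00FFULL) * 6553601 >> 16;      // pairs -> 4-digit groups
  return uint32_t((val & 0x0000FFFF0000FFFFULL) * 42949672960001ULL >> 32); // -> 8-digit value
}

// Typical hot loop over a digit run (the `is_eight_digits` check is assumed):
//   while (end - p >= 8 && is_eight_digits(p)) {
//     mantissa = mantissa * 100000000 + parse_eight_digits(p);
//     p += 8;
//   }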
**Parsed Number**
Two fields have been added to `parsed_number_string`, containing slices
of the integer and fraction digits. This is extremely cheap, since the
work is already done and the strings are pre-tokenized during parsing.
On overflow, this allows us to re-parse the tokenized strings without
checking whether each character is a digit. Likewise, the big-integer
algorithms can simply re-parse the pre-tokenized strings.
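For illustration, the two added fields amount to something like the sketch below, using the slice type described later; field names and layout are illustrative, not the exact struct in the patch.

#include <cstddef>
#include <cstdint>

// The parsed number keeps pre-tokenized views of its digit runs, so slow-path
// code never has to re-validate characters.
template <typename T>
struct span {
  const T* ptr;
  size_t length;
};

struct parsed_number_string {
  int64_t exponent;
  uint64_t mantissa;
  const char* lastmatch;          // one past the last parsed character
  bool negative;
  bool valid;
  bool too_many_digits;           // more significant digits than fit in the mantissa
  span<char> integer;             // e.g. "123" in "123.456e7"
  span<char> fraction;            // e.g. "456" in "123.456e7"
};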
**Slow Algorithm**
The new algorithm is `digit_comp`, which takes the parsed number string
and the `adjusted_mantissa` from `compute_float`. The significant digits
are parsed into a big integer, and the exponent relative to the
significant digits is calculated. If the exponent is >= 0, we use
`positive_digit_comp`, otherwise, we use `negative_digit_comp`.
`positive_digit_comp` is quite simple: we scale the significant digits
by the exponent, take the high 64 bits for the native float, determine
whether any lower bits were truncated, and use that to direct rounding.
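A minimal sketch of that path. `BigInt` is assumed to provide `mul_pow10`, `hi64`, and `bit_length`; the exponent bias and the rounding callback of the real code are omitted.

#include <cstdint>

struct adjusted_mantissa {
  uint64_t mantissa;
  int32_t power2;
};

template <typename BigInt>
adjusted_mantissa positive_digit_comp_sketch(BigInt& digits, int32_t exponent) {
  digits.mul_pow10(static_cast<uint32_t>(exponent));   // significant digits * 10^exp, exactly
  bool truncated;
  adjusted_mantissa am;
  am.mantissa = digits.hi64(truncated);                // top 64 bits; `truncated` records lost bits
  am.power2 = static_cast<int32_t>(digits.bit_length()) - 64;
  // Round to nearest, ties to even: a halfway bit pattern only counts as a true
  // tie if nothing below it was truncated; otherwise round up.
  return am;
}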
`negative_digit_comp` is a little more complex, but still
straightforward: we use the parsed significant digits as the real
digits, and calculate the theoretical digits from `b+h`, the halfway
point between `b` and `b+u`, the next float up. To get `b`, we round the
adjusted mantissa down, create an extended-precision representation, and
calculate the halfway point. We now have a base-10 exponent for the real
digits and a base-2 exponent for the theoretical digits. We scale the
two to the same exponent by multiplying the theoretical digits by
`5**-real_exp`. We then get the base-2 exponent as `theor_exp -
real_exp`; if this is positive, we multiply the theoretical digits by
it, otherwise we multiply the real digits by it. Now both are scaled to
the same magnitude, and we simply compare the digits in the big integers
and use that to direct rounding.
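A sketch of how the real and theoretical digits are brought to the same scale before comparison. `BigInt` is assumed to provide `mul_pow5`, `mul_pow2`, and `compare`; building `b+h` and the final rounding step are omitted.

#include <cstdint>

template <typename BigInt>
int scale_and_compare(BigInt& real_digits, int32_t real_exp,     // real value = real_digits  * 10^real_exp
                      BigInt& theor_digits, int32_t theor_exp) { // b+h        = theor_digits * 2^theor_exp
  // real_exp <= 0 here, so multiplying the theoretical digits by 5^-real_exp
  // cancels the base-10 factor and leaves only powers of two between the sides.
  theor_digits.mul_pow5(static_cast<uint32_t>(-real_exp));
  // Whichever side has the smaller binary exponent is shifted up to match.
  int32_t pow2_exp = theor_exp - real_exp;
  if (pow2_exp > 0) {
    theor_digits.mul_pow2(static_cast<uint32_t>(pow2_exp));
  } else if (pow2_exp < 0) {
    real_digits.mul_pow2(static_cast<uint32_t>(-pow2_exp));
  }
  // Both sides are now integers at the same magnitude: the ordering of the real
  // digits against the halfway point directs how b is rounded.
  return real_digits.compare(theor_digits);   // < 0, 0, or > 0
}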
**Rust-Isms**
A few Rust-isms have been added, since they simplify logic assertions.
These can be trivially removed or reworked as needed.
- a `slice` type has been added, which is a pointer and length.
- `FASTFLOAT_ASSERT`, `FASTFLOAT_DEBUG_ASSERT`, and `FASTFLOAT_TRY` have
been added
- `FASTFLOAT_ASSERT` aborts, even in release builds, if the condition
fails.
- `FASTFLOAT_DEBUG_ASSERT` defaults to `assert`, for logic errors.
- `FASTFLOAT_TRY` is like a Rust `Option` type, which propagates
errors.
Specifically, `FASTFLOAT_TRY` is useful in combination with
`FASTFLOAT_ASSERT` to ensure there are no memory corruption errors
possible in the big-integer arithmetic. Although the `bigint` type
ensures we have enough storage for all valid floats, memory issues are
quite a severe class of vulnerabilities, and due to the low performance
cost of checks, we abort if we would have out-of-bounds writes. This can
only occur when we are adding items to the vector, which is a very small
number of steps. Therefore, we abort if our memory safety guarantees
ever fail. lexical has never aborted, so it's unlikely we will ever fail
these guarantees.
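A rough sketch of how these pieces fit together; the macro definitions and the helper here are simplified for illustration and may differ from the patch in detail.

#include <cassert>
#include <cstdlib>

#define FASTFLOAT_ASSERT(x)        { if (!(x)) ::abort(); }      // checked even in release builds
#define FASTFLOAT_DEBUG_ASSERT(x)  assert(x)                      // logic errors, debug builds only
#define FASTFLOAT_TRY(x)           { if (!(x)) return false; }    // propagate failure to the caller

// A fallible helper propagates any failed push with FASTFLOAT_TRY...
template <typename Vec>
bool push_limbs(Vec& v, unsigned long long lo, unsigned long long hi) {
  FASTFLOAT_TRY(v.try_push(lo));
  FASTFLOAT_TRY(v.try_push(hi));
  return true;
}

// ...and the outermost caller converts a failure that "cannot happen" for any
// valid float into an abort rather than an out-of-bounds write:
//   FASTFLOAT_ASSERT(push_limbs(limbs, lo, hi));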
fast_float number parsing library: 4x faster than strtod
The fast_float library provides fast header-only implementations for the C++ from_chars
functions for float and double types. These functions convert ASCII strings representing
decimal values (e.g., 1.3e10) into binary types. We provide exact rounding (including
round to even). In our experience, these fast_float functions are many times faster than comparable number-parsing functions from existing C++ standard libraries.
Specifically, fast_float provides the following two functions with a C++17-like syntax (the library itself only requires C++11):
from_chars_result from_chars(const char* first, const char* last, float& value, ...);
from_chars_result from_chars(const char* first, const char* last, double& value, ...);
The return type (from_chars_result) is defined as the struct:
struct from_chars_result {
const char* ptr;
std::errc ec;
};
It parses the character sequence [first,last) for a number. It parses floating-point numbers expecting
a locale-independent format equivalent to what is used by std::strtod in the default ("C") locale.
The resulting floating-point value is the closest floating-point value (using either float or double),
using the "round to even" convention for values that would otherwise fall right in between two values.
That is, we provide exact parsing according to the IEEE standard.
Given a successful parse, the pointer (ptr) in the returned value is set to point right after the
parsed number, and the value referenced is set to the parsed value. In case of error, the returned
ec contains a representative error, otherwise the default (std::errc()) value is stored.
The implementation does not throw and does not allocate memory (e.g., with new or malloc).
It will parse infinity and nan values.
Example:
#include "fast_float/fast_float.h"
#include <iostream>
int main() {
const std::string input = "3.1416 xyz ";
double result;
auto answer = fast_float::from_chars(input.data(), input.data()+input.size(), result);
if(answer.ec != std::errc()) { std::cerr << "parsing failure\n"; return EXIT_FAILURE; }
std::cout << "parsed the number " << result << std::endl;
return EXIT_SUCCESS;
}
Like the C++17 standard, the fast_float::from_chars functions take an optional last argument of
the type fast_float::chars_format. It is a bitset value: we check whether
fmt & fast_float::chars_format::fixed and fmt & fast_float::chars_format::scientific are set
to determine whether we allow the fixed point and scientific notation respectively.
The default is fast_float::chars_format::general which allows both fixed and scientific.
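For instance, here is a small variant of the example above that accepts only scientific notation (an illustrative sketch; error handling is kept minimal):
#include "fast_float/fast_float.h"
#include <iostream>
int main() {
  const std::string input = "31416e-4";   // "3.1416" would be rejected with this format
  double result;
  auto answer = fast_float::from_chars(input.data(), input.data() + input.size(),
                                       result, fast_float::chars_format::scientific);
  if (answer.ec != std::errc()) { std::cerr << "parsing failure\n"; return EXIT_FAILURE; }
  std::cout << "parsed the number " << result << std::endl;
  return EXIT_SUCCESS;
}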
The library seeks to follow the C++17 (see 20.19.3.(7.1)) specification. In particular, it forbids leading spaces and the leading '+' sign.
We support Visual Studio, macOS, Linux, FreeBSD. We support big and little endian. We support 32-bit and 64-bit systems.
Using commas as decimal separator
The C++ standard stipulates that from_chars has to be locale-independent. In
particular, the decimal separator has to be the period (.). However,
some users still want to use the fast_float library in a locale-dependent
manner. Using a separate function called from_chars_advanced, we allow users
to pass a parse_options instance which contains a custom decimal separator (e.g.,
to pass a parse_options instance which contains a custom decimal separator (e.g.,
the comma). You may use it as follows.
#include "fast_float/fast_float.h"
#include <iostream>
int main() {
const std::string input = "3,1416 xyz ";
double result;
fast_float::parse_options options{fast_float::chars_format::general, ','};
auto answer = fast_float::from_chars_advanced(input.data(), input.data()+input.size(), result, options);
if((answer.ec != std::errc()) || ((result != 3.1416))) { std::cerr << "parsing failure\n"; return EXIT_FAILURE; }
std::cout << "parsed the number " << result << std::endl;
return EXIT_SUCCESS;
}
Reference
- Daniel Lemire, Number Parsing at a Gigabyte per Second, Software: Practice and Experience 51 (8), 2021.
Other programming languages
- There is an R binding called rcppfastfloat.
- There is a Rust port of the fast_float library called fast-float-rust.
- There is a Java port of the fast_float library called FastDoubleParser.
- There is a C# port of the fast_float library called csFastFloat.
Relation With Other Work
The fast_float library provides performance similar to that of the fast_double_parser library, but using an updated algorithm reworked from the ground up, while offering an API more in line with the expectations of C++ programmers. The fast_double_parser library is part of the Microsoft LightGBM machine-learning framework.
Users
The fast_float library is used by Apache Arrow where it multiplied the number parsing speed by two or three times. It is also used by Yandex ClickHouse and by Google Jsonnet.
How fast is it?
It can parse random floating-point numbers at a speed of 1 GB/s on some systems. We find that it is often twice as fast as the best available competitor, and many times faster than many standard-library implementations.
$ ./build/benchmarks/benchmark
# parsing random integers in the range [0,1)
volume = 2.09808 MB
netlib : 271.18 MB/s (+/- 1.2 %) 12.93 Mfloat/s
doubleconversion : 225.35 MB/s (+/- 1.2 %) 10.74 Mfloat/s
strtod : 190.94 MB/s (+/- 1.6 %) 9.10 Mfloat/s
abseil : 430.45 MB/s (+/- 2.2 %) 20.52 Mfloat/s
fastfloat : 1042.38 MB/s (+/- 9.9 %) 49.68 Mfloat/s
See https://github.com/lemire/simple_fastfloat_benchmark for our benchmarking code.
Using as a CMake dependency
This library is header-only by design. The CMake file provides the fast_float target
which is merely a pointer to the include directory.
If you drop the fast_float repository in your CMake project, you should be able to use
it in this manner:
add_subdirectory(fast_float)
target_link_libraries(myprogram PUBLIC fast_float)
Or you may want to retrieve the dependency automatically if you have a sufficiently recent version of CMake (at least version 3.11):
FetchContent_Declare(
fast_float
GIT_REPOSITORY https://github.com/lemire/fast_float.git
GIT_TAG tags/v1.1.2
GIT_SHALLOW TRUE)
FetchContent_MakeAvailable(fast_float)
target_link_libraries(myprogram PUBLIC fast_float)
You should change the GIT_TAG line so that you recover the version you wish to use.
Using as single header
The script script/amalgamate.py may be used to generate a single header
version of the library if so desired.
Just run the script from the root directory of this repository.
You can customize the license type and output file if desired as described in
the command line help.
You may directly download automatically generated single-header files:
https://github.com/fastfloat/fast_float/releases/download/v1.1.2/fast_float.h
Credit
Though this work is inspired by many different people, this work benefited especially from exchanges with Michael Eisel, who motivated the original research with his key insights, and with Nigel Tao who provided invaluable feedback. Rémy Oudompheng first implemented a fast path we use in the case of long digits.
The library includes code adapted from Google Wuffs (written by Nigel Tao) which was originally published under the Apache 2.0 license.
License
Licensed under either of Apache License, Version 2.0 or MIT license at your option. Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this repository by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
