I came across a clever method for approximate float comparison which exploits their binary representation:
bool almost_equal(float a, float b) {
    uint32_t abits = to_bits(a);
    uint32_t bbits = to_bits(b);
    // Take the absolute difference: a plain abits - bbits would wrap
    // around to a huge unsigned value whenever b > a.
    uint32_t diff = abits > bbits ? abits - bbits : bbits - abits;
    return diff < 4;
}
Where 4 is the tolerance in "units in the last place".
to_bits() isn't a simple bit_cast. It also does some magic with the sign bit. But for now let's only consider positive floats...
The question on my mind was: does this give the correct distance even around the boundary of an exponent bump?
And yes: mantissa and exponent are arranged in such a way that adding 1 to the binary form always yields the next float:
>>> import struct
>>> to_float = lambda i: struct.unpack(">f", i.to_bytes(4, "big"))[0]
>>> to_float(0x3FFFFFFF)
1.9999998807907104
>>> to_float(0x40000000)
2.0
>>> to_float(0x40000001)
2.000000238418579
Of course this also works with double and long double.
My production version of almost_equals() is templated using C++20 concepts and the nicer "auto" syntax:
bool almost_equals(std::floating_point auto a, std::floating_point auto b) {
    auto abits = to_bits(a);
    auto bbits = to_bits(b);
    return distance(abits, bbits) < 4;
}
For more information, consult your local encyclopedia:
https://en.wikipedia.org/wiki/Single-precision_floating-point_format
@codewiz
Does the distance function only look at the last bits?
If it looked at the first bits, we'd be measuring the distance between the exponents.
@Maryam It compares all 32 bits of the floats (there's no masking), but it allows the difference to be up to 4 ULPs = 2 least significant bits.