As discussed on the forum, the accuracy of exp() suffers when using fast-math because LLVM optimizes away a numerical trick that is used for rounding:
https://discourse.julialang.org/t/whats-going-on-with-exp-and-math-mode-fast/64619/21
While some inaccuracy is expected when using fast-math, this specific case could be improved by instructing LLVM not to optimize away the rounding trick.
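For context, the trick in question is the standard magic-constant rounding: adding and then subtracting a large constant pushes a Float64 into the range [2^52, 2^53), where the spacing between adjacent doubles is exactly 1.0, so the intermediate result rounds to the nearest integer. Under fast-math, LLVM is licensed to reassociate (x + M) - M into plain x, which deletes the rounding entirely. A minimal sketch of the trick itself in Python (which also uses IEEE-754 doubles) — the constant 1.5 * 2^52 is my assumption of what MAGIC_ROUND_CONST(Float64) evaluates to:

```python
# Magic-constant rounding: for x well inside +/- 2^51, (x + M) - M
# rounds x to the nearest integer, because x + M lands in [2^52, 2^53)
# where the spacing between adjacent doubles is exactly 1.0.
M = 6755399441055744.0  # 1.5 * 2**52; assumed value of MAGIC_ROUND_CONST(Float64)

def round_via_magic(x: float) -> float:
    # A fast-math compiler may legally simplify this to just `x`,
    # which is exactly the optimization the issue is about.
    return (x + M) - M

print(round_via_magic(1.3))   # 1.0
print(round_via_magic(-2.7))  # -3.0
print(round_via_magic(2.5))   # 2.0 (ties round to even)
```

Python evaluates this at run time without reassociation, so the trick survives; the problem only appears when a compiler is told the fast-math rules apply.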
There's probably a better way to do this, but I was able to get exp() working with fast-math by replacing the subtraction:
N_float -= MAGIC_ROUND_CONST(T)
with
N_float = forcesub(N_float, MAGIC_ROUND_CONST(T))
where
function forcesub(a, b)
    # Plain fsub with no fast-math flags, so LLVM cannot reassociate it away
    Base.llvmcall("%3 = fsub double %0, %1\nret double %3", Float64, Tuple{Float64,Float64}, a, b)
end
Before (with --math-mode=fast):
julia> exp(1.0)
2.7158546124258023
julia> exp(1e-3)
1.0
After (with --math-mode=fast):
julia> exp(1.0)
2.718281828459045
julia> exp(1e-3)
1.0010005001667084