I'm having trouble getting consistent results when comparing two floating-point numbers across different hardware.
I have a unit vector and I want to take the dot product of the vector with itself. The answer should be 1, since the vectors are identical; I have verified that the magnitude of the vector is 1 to 16 decimal places. When I compile and run the code on my local machine, the dot product comes out as 0.9999999999999999. When I deploy the exact same code to a server and run it there, the result is 1.0000000000000002. The end goal is to pass this value to the intrinsic ACOS function to get an angle, but since the server's value is greater than 1, I end up with NaN.
I had optimizations turned on at first, but the problem still exists after turning them all off (I think). I am using the strict floating-point model, and, as I said, the dll is exactly the same on both machines. Are there any compiler options, or anything else I can do, to help the situation? If not, does anyone understand why this happens? I understand that different architectures can give different results, but it's strange to me that the intrinsic dot product isn't really working. I'm sure this is a somewhat common question, but I've done some digging and haven't found a satisfactory answer. I appreciate any help, and thanks in advance.