Until recently I was inclined to fully agree with your explanation. However, I noticed that other codes doing similar calculations keep their results equal across different compilers to within 8 significant figures. The thing is that this code is not meant to run for 500 iterations but for millions, after which the results are completely different. I don't feel very comfortable with that, since it raises the question of which results I should use.
e.g. for 50 000 iterations, the final results for two variables are:
compaq final: 0.542708150408233E+02 0.418531824196778E+03
Intel final: 0.106917608168976E+04 0.156911394876724E+04
After a certain number of iterations the basic variables differ too much, and the two runs diverge onto completely different trajectories. Some "error" propagation drives the final results completely apart. In other codes of the same type this does not happen.
Given the results above, I believe something may be wrong with the code itself.
Thanks a lot in advance.