gettimeofday is the default timer on Linux* OS, with _ftime being the equivalent on Microsoft* Windows* OS. Its API limits the clock resolution to 1 μs, but depending on which timer the OS actually uses, the effective resolution may be much coarser (_ftime usually shows a resolution of only 1 ms). Because it is implemented as a system call, it has a higher overhead than the other timers.

In theory the advantage of this call is that the OS can make better use of the available hardware, so this timer should be stable over time even if NTP is not running. In practice, however, Figure 5.4 shows that at least on that system a considerable deviation between different nodes occurred during the run.

If NTP is running, the clock of each node may be modified by the NTP daemon in a non-linear way. NTP should not cause jumps; it only accelerates or slows down the system time.

Figure 5.2 CPU Timer: clock Transformation and the Sample Points It Is Based on

However, even decreasing system time stamps have been observed on some systems. This may or may not have been due to NTP.

Thanks to the clock synchronization performed at runtime, enabling NTP did not make the result worse than it is without NTP (Figure 5.5). However, NTP alone, without the additional intermediate synchronization, would have led to deviations of nearly 70 μs.

The recommendation is therefore to enable NTP, but intermediate clock synchronization by Intel® Trace Collector is still needed to achieve good results.
