I am having trouble converting a difference of two samples obtained by
the rdtsc instruction to a wall clock time on a Sandy Bridge desktop
(Core i7-2600K) and am wondering if someone can help me understand
what I am doing wrong.
If I run a simple program that captures the TSC, does a sleep(1), captures
the TSC again, and then prints the difference, I get a value very close to
3,502,000,000 ticks each time.
This says to me that the Sandy Bridge Invariant TSC is ticking at about 3.502 GHz.
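The capture program is essentially the following (a minimal sketch assuming GCC inline assembly; I am not serializing with cpuid or rdtscp, since a coarse rate estimate is all I need here):

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Read the 64-bit time-stamp counter via the rdtsc instruction. */
    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        uint64_t start = rdtsc();
        sleep(1);                               /* roughly one second of wall time */
        uint64_t end = rdtsc();
        printf("TSC ticks across sleep(1): %llu\n",
               (unsigned long long)(end - start));
        return 0;
    }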
Page B-136 of Volume 3 of the Intel 64 and IA-32 Architectures Software Developer's
Manual documents the Sandy Bridge MSR CEH (MSR_PLATFORM_INFO). The relevant field is:
15:8 Package Maximum Non-Turbo Ratio. (R/O)
This is the ratio of the frequency at which the invariant
TSC runs. Frequency = ratio * 100 MHz.
Reading this MSR on my platform, I get a value of 0x0000100060012200.
Maximum Non-Turbo Ratio (bits 15:8): 0x22 = 34, so 34 * 100 MHz = 3.400 GHz.
This appears to be about 100 MHz lower than the TSC rate I measured.
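For reference, this is roughly how I read the MSR (a sketch assuming Linux with the msr driver loaded and root privileges; the MSR address is passed as the pread offset):

    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Requires the msr module (modprobe msr) and root privileges. */
        int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

        uint64_t platform_info;
        /* MSR 0xCE is MSR_PLATFORM_INFO on Sandy Bridge. */
        if (pread(fd, &platform_info, sizeof platform_info, 0xCE) != sizeof platform_info) {
            perror("pread"); return 1;
        }
        close(fd);

        /* Bits 15:8 hold the Maximum Non-Turbo Ratio; frequency = ratio * 100 MHz. */
        unsigned ratio = (unsigned)((platform_info >> 8) & 0xFF);
        printf("MSR 0xCE = 0x%016llx, non-turbo ratio = 0x%02x -> %u MHz\n",
               (unsigned long long)platform_info, ratio, ratio * 100);
        return 0;
    }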
According to Intel's specifications for the Core i7-2600K, the processor has a
maximum turbo frequency of 3.8 GHz.
Reading MSR 1ADH confirms this:
0x26 = 38 * 100 MHz = 3.8 GHz (max with 1 core active)
0x25 = 37 * 100 MHz = 3.7 GHz (max with 2 cores active)
0x24 = 36 * 100 MHz = 3.6 GHz (max with 3 cores active)
0x23 = 35 * 100 MHz = 3.5 GHz (max with 4 cores active)
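Decoding MSR 1ADH (MSR_TURBO_RATIO_LIMIT) is the same kind of read as above; a sketch, where byte n holds the maximum turbo ratio with n+1 cores active:

    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

        uint64_t turbo_limit;
        /* MSR 0x1AD is MSR_TURBO_RATIO_LIMIT on Sandy Bridge. */
        if (pread(fd, &turbo_limit, sizeof turbo_limit, 0x1AD) != sizeof turbo_limit) {
            perror("pread"); return 1;
        }
        close(fd);

        /* Low four bytes: max turbo ratio with 1, 2, 3, 4 cores active. */
        for (int cores = 1; cores <= 4; cores++) {
            unsigned ratio = (unsigned)((turbo_limit >> (8 * (cores - 1))) & 0xFF);
            printf("max turbo with %d core(s) active: 0x%02x -> %u MHz\n",
                   cores, ratio, ratio * 100);
        }
        return 0;
    }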
Am I doing something wrong in trying to compute the expected tick rate of
the Invariant TSC? Did the BIOS program something incorrectly?