Converting Sandy Bridge TSC to wall clock time

I am having trouble converting a difference of two samples obtained by
the rdtsc instruction to a wall clock time on a Sandy Bridge desktop
(Core i7-2600K) and am wondering if someone can help me understand
what I am doing wrong.

If I run a simple program that captures the TSC, does a sleep(1), does another
capture of the TSC, and then prints the result, I get a value very close to the
following each time:
~3502401752

This says to me that the Sandy Bridge Invariant TSC is ticking at about 3.502 GHz.

Page B-136 of Vol. 3 of the Intel 64 and IA-32 Architectures Software Developer's
Manual details Sandy Bridge MSR CEH. The relevant field is:
15:8 Package Maximum Non-Turbo Ratio (R/O)
This is the ratio of the frequency at which the invariant
TSC runs. Frequency = ratio * 100 MHz.

Reading this MSR on my platform, I get a value of 0x0000100060012200.
Maximum Non-Turbo Ratio: 0x22 = 34; 34 * 100 MHz = 3.400 GHz
-- This appears to be off by 100 MHz

According to Intel's doc for the Core i7-2600K, the processor has a maximum
turbo speed of 3.8 GHz.
http://ark.intel.com/Product.aspx?id=52214
Reading MSR 1ADH confirms this:
0x0000000023242526
0x26 = 38; 38 * 100 MHz = 3.8 GHz (max for 1 core)
0x25 = 37; 37 * 100 MHz = 3.7 GHz (max for 2 cores)
0x24 = 36; 36 * 100 MHz = 3.6 GHz (max for 3 cores)
0x23 = 35; 35 * 100 MHz = 3.5 GHz (max for 4 cores)

Am I doing something wrong in trying to compute the expected tick rate of
the Invariant TSC? Did the BIOS program something incorrectly?


Do I misunderstand what you are saying? It seems you are measuring the TSC count before and after an unspecified implementation of sleep(1) and claiming that should indicate the TSC rate. Did you perform an independent check on sleep() to find the accuracy of its intervals, including possible latency effects in entering and leaving a sleep state?
Among the more usual ways of checking TSC rate is to run a loop which keeps the CPU in active state over a similar or longer time, and compare the TSC count against gettimeofday().

You understand correctly. We simplified the observation to just using sleep(1) after running into the issue in much more complicated code. The latency effects of sleep() are nominal compared to how far the ticks are off from the expected. Switching to gettimeofday() does not offer significantly more resolution than the simple example, and still suffers the same vulnerability -- the OS clock could be inaccurate.

Regardless, I changed the code to spin on gettimeofday() for at least 10 seconds and then measured the results. I got the following:
TSC ticks per sec = 3502037486
where previously I got:
TSC ticks per sec = 3502401752

So accuracy improved by 0.01%, whereas this "more accurate" number of ticks is still off from the expected by 3%. I have access to several Nehalem hosts which do not seem to suffer from this issue.

To verify the clock of the host OS is functioning correctly, I ran another experiment where I increased
the test interval to 15 minutes and did the following:
date; ssh testhost run_tsc_test; date

If the clock were off by 3%, I should see a 27 second skew between what testhost reports for the
time and what the local machine reports. The following output shows no skew:
Tue Apr 26 14:07:21 PDT 2011
Total TSC ticks = 3152757743076
Time = 900.265 sec
TSC ticks per sec = 3502034495
Tue Apr 26 14:22:22 PDT 2011

For reference, I'll include the source I used:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <sys/time.h>

/*
 * Read the 64-bit TSC (time stamp counter).
 */
static uint64_t
rdtsc(void)
{
    unsigned long msw;
    unsigned long lsw;

    __asm__ __volatile__("rdtsc" : "=a" (lsw), "=d" (msw) :);
    return (((uint64_t) msw << 32) | lsw);
}

int
main(int argc, char *argv[])
{
    uint64_t start;
    uint64_t end;
    struct timeval tv_start;
    struct timeval tv_end;
    double sec;

    gettimeofday(&tv_start, NULL);
    start = rdtsc();
    while (1) {
        gettimeofday(&tv_end, NULL);
        if (tv_end.tv_sec > tv_start.tv_sec + 900)
            break;
    }
    end = rdtsc();
    printf("Total TSC ticks = %"PRIu64"\n", end - start);
    sec = (tv_end.tv_sec - tv_start.tv_sec) +
        (double) (tv_end.tv_usec - tv_start.tv_usec) / 1000000;
    printf("Time = %.3lf sec\n", sec);
    printf("TSC ticks per sec = %.0lf\n", (double) (end - start) / sec);
    return 0;
}

>>number of ticks is still off from the expected by 3%

Might this be 24/1024ths or 24/1000ths?

In other words, one party assuming the other party's value of K.

Jim Dempsey

www.quickthreadprogramming.com

> Might this be 24/1024ths or 24/1000ths?

Thanks for your reply. Possibly, but not probably. GHz is always specified
in base 10, whereas, as you know, GB suffers from the base-2/base-10 confusion.

Best Reply

Two options:
1) The TSC is incremented by a multiplier on every BCLK tick -- in your case the multiplier should be 34. Your assumption is that BCLK is 100 MHz, but it could be higher (overclocking -- check the BIOS). You can read the unhalted reference clock rate (BCLK) from the event monitor.
2) The K part is unlocked -- have you changed your base multiplier? Look at MSR_PERF_STAT[31]; if set, the TSC is incremented by MSR_PERF_STAT[44:40].

1) This is a great tip. Thank you. I'll check out the BIOS settings when I'm back at work next week.
I would like to understand better your last sentence. Are you saying there is an MSR or device
register that I can read that might provide the bclk value? That would be the best situation for me.

2) The MSR_PERF_STAT[31] is not documented for Sandy Bridge, though I do see it in the Core Microarchitecture section. As we both know, it's quite possible Intel just hasn't documented this register fully for Sandy Bridge.

If I read MSR 0x198, I get the following value: 0x00001f8b00001000. Bit 31 is not set. Bits 44:40 have a value of 0x1f, which is 31. So that's a little confusing as it's below 34.

If I run dmidecode, I get the following relevant output from the BIOS:
Version: Intel Core i7-2600K CPU @ 3.40GHz
Signature: Type 0, Family 6, Model 42, Stepping 7
Voltage: 1.0 V
External Clock: 100 MHz
Max Speed: 3800 MHz
Current Speed: 3400 MHz

I was hoping that the BIOS might give me a hint here that it's running the External
Clock (which I assume is BCLK) at 103 MHz. No luck, though that might just be a
static table.

I don't know if it's useful to know or not, but this is an ASUS P8H67-M motherboard, and
the BIOS probably does allow tweaking BCLK (though if this were touched, it was by the
OEM who assembled it for us):
Manufacturer: ASUSTeK Computer INC.
Product Name: P8H67-M EVO
Version: Rev 1.xx

I'll definitely check BIOS settings on Monday.

>>Bits 44:40 have a value of 0x1f, which is 31. So that's a little confusing as it's below 34.

That's 5 bits. Largest number using 5 bits is 31. Perhaps this bit field is encoded.

Jim Dempsey

www.quickthreadprogramming.com

Good point, Jim. The field may be encoded on Nehalem. I've received more information
now, and bits 47:32 are dedicated to the P-State Voltage value on Sandy Bridge so
this field can't be interpreted the same as with Nehalem.

You can use a performance monitoring tool (like VTune) to collect both BCLK and TSC counts (while the cores are active -- run a while (1) loop on a single core). The ratio between the two will give you the constant multiplier. If it is 34, your BCLK is probably overclocked; you can try comparing it to the ACPI timer, which runs at a known frequency.

We entered the ASUS BIOS and found "Automatic Overclocking" was enabled.
Disabling this option in the BIOS appears to have resolved the issue (we now
get a TSC rate of 3.4 GHz instead of 3.5 GHz when running turbostat).
I'm concluding at this point that ASUS was running BCLK at 103 MHz when in this
mode. Thank you for all of your help, neni!
