Accuracy-overhead tradeoff

Hello everyone,

I am thinking of using performance counters for some profiling-related work. Basically, I want to know basic-block execution frequencies and branch biases (i.e., whether a branch is almost always taken or almost always not taken). AFAIK I should be using a sampling-based approach for this. Since sampling-based approaches become less accurate as the sampling interval increases, I was wondering if anyone has done a tradeoff study of overhead vs. accuracy as a function of sampling rate for the kind of profiling I'm interested in.
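To get a feel for the accuracy side of this tradeoff before touching real hardware counters, here is a toy simulation (my own sketch, not real PMU sampling): it generates a synthetic branch-outcome stream with a known bias and then estimates that bias from samples taken at different periods, mimicking how a larger sampling interval leaves fewer samples per branch. The function name and parameters are just illustrative.

```python
import random

def estimate_branch_bias(true_taken_prob, n_executions, period, seed=42):
    """Simulate one branch executing n_executions times with a fixed
    taken-probability, then estimate that probability from every
    `period`-th outcome (a stand-in for a sampling interval)."""
    rng = random.Random(seed)
    outcomes = [rng.random() < true_taken_prob for _ in range(n_executions)]
    samples = outcomes[::period]  # keep one outcome per sampling period
    true_bias = sum(outcomes) / len(outcomes)
    estimated_bias = sum(samples) / len(samples)
    return true_bias, estimated_bias, len(samples)

# Longer periods mean fewer samples, hence a noisier bias estimate.
for period in (1, 10, 100, 1000):
    true_bias, est_bias, n_samples = estimate_branch_bias(0.9, 100_000, period)
    print(f"period={period:5d} samples={n_samples:6d} "
          f"true={true_bias:.3f} est={est_bias:.3f} "
          f"err={abs(true_bias - est_bias):.3f}")
```

This ignores the overhead side entirely (each real sample costs an interrupt and a buffer write, which is what drives the overhead-vs-rate curve), but it shows why the estimation error grows as the interval does.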

Any pointers to related material or comments would be helpful.
