Advisor - not geared to detect for loop parallelism

michele-delsol wrote:

Hello,

I tried Advisor (latest release, update 1 - VS 2008 - Intel C++ 12) on an app which has four functions which run serially and which contain for loops. When annotating on tasks, I get realistic results from parallelising. However, if I annotate on the for loops instead of the functions, I get a 2X projected improvement which stays constant whether I run on 2 procs or 32 procs.

With for loop parallelisation I should be getting a near-linear speedup with the number of procs.

It seems to me that Advisor is missing a for-loop-oriented annotation. Annotating the for loop with the TASK annotations does not seem to yield a correct suitability report. Advisor seems to need more work to address for loops.

Or am I missing something?

Thanks,
Michele

RAVI (Intel) wrote:

Hello Michele,

Thank you for taking the time to give us your feedback. We appreciate it.

Would it be possible for you to share the test program so that I can investigate further? You can send me an email at "ravi DOT vemuri AT intel DOT com" if you'd like to provide the test program.

Thank you!

Ravi

michele-delsol wrote:

Ravi,

Thanks for your prompt response.

Here is the code, the same as in my post on the Advisor correctness issue.

double sum = 0.0;
static double step = 1.0/(double) ITERATIONS;
double x;

ANNOTATE_SITE_BEGIN( BoucleFor );
for (int i = 0; i < ITERATIONS; i++) {
    ANNOTATE_TASK_BEGIN(BoucleForA);
    x = ((double)i + 0.5) * step;
    sum += 4.0/(1.0 + x*x);
    ANNOTATE_TASK_END(BoucleForA);
}
ANNOTATE_SITE_END( BoucleFor );
pi = step * sum;

When dealing with tasks, Advisor does this quite well as far as I could determine. However, it is not at all suited to taking data-based parallelism into account. It will report loops that consume CPU time; however, when annotating and inspecting, it treats the loop as if it were a single task running in parallel with other tasks, not as a piece of code running in parallel with itself, as loops are supposed to do. It will consequently not reflect speed improvements as the number of procs increases and, what's worse, it will not detect data access problems.

A specific loop based annotation seems to be needed.

Please consider my previous post on Advisor correctness closed as this post covers both correctness and suitability.

Thanks,
Michele

RAVI (Intel) wrote:

Hello Michele,

There are two issues here.

The suitability issue looks like a bug; we will look further into that. We do expect Advisor to be helpful for data parallelism, but it appears that when the work items are very small, the measurements skew the data unacceptably.

The correctness issue is intentional, and has to do with the limits of the techniques available to us. Consider this code:

double x1;

ANNOTATE_SITE_BEGIN( BoucleFor );
for (int i = 0; i < ITERATIONS; i++) {
    ANNOTATE_TASK_BEGIN(BoucleForA);
    double x = ((double)i + 0.5) * step;
    sum += 4.0/(1.0 + x*x);
    ANNOTATE_TASK_END(BoucleForA);
}
ANNOTATE_SITE_END( BoucleFor );

Now, "x" is local to a task. But the compiler is going to allocate "x" in the frame of the containing routine, just as in your example, where it was declared outside the site (where x1 is declared here). Because Advisor works on the serial code, it cannot tell by looking at the binary where the variable was declared.

So we have to choose between reporting false positives and reporting false negatives. Because most parallel programming models either detect this problem (like OpenMP) or force the programmer to deal with it in translation (like TBB), we opted for missing races on local variables rather than reporting false positives (for instance, on compiler stack temporaries), which would be very confusing for the user. You can see this in action if you change x to static double x; we will then report the race.

Again, we appreciate the time you took to provide feedback. Please do not hesitate to let us know if you have any additional questions or concerns.

Happy holidays!

Ravi
