Advisor Lite methodology - fine old wine in a new bottle.

Imagine going to the moon without first rehearsing the possible scenarios in a simulator here on Earth! Most people would consider that laughable. Yet most, if not all, developers parallelize code without first modeling it for both performance and correctness. Software product development can sometimes feel like rocket science, but thanks to Intel Parallel Advisor Lite, moving to parallel code does not have to be.

Parallelizing code without modeling it first can be time consuming and costly: you pay the overhead of identifying candidate locations and then the overhead of actually incorporating parallel code in each one. If you do not explore all of the candidates, you can miss big opportunities. Correctness errors can also lie latent in parallel code; they may not show up under ordinary testing, and when they eventually surface late in the development timeline, or after the product ships to customers, they trigger costly rework. Fortunately, Parallel Advisor Lite offers a better way.

In very simple terms, you can think of Parallel Advisor Lite as a simulator for performance and correctness when attempting to parallelize code. Using Advisor Lite annotations (which behave rather like C/C++ macros), you can "pretend" to incorporate parallelism in different portions of your code to find out whether parallelization would pay off and, just as important, whether the code would still work correctly. The approach is effective because the annotations are simple and high level: they can be inserted and removed easily as you try different candidate sites for parallelism. They are also transparent to your traditional build-test infrastructure, which means you can use your existing tests to sanity check any correctness changes you make.
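To make this concrete, here is a minimal sketch of annotating a candidate loop. The header name and macro spellings follow the Intel Advisor annotation API as I know it; they may differ slightly in your Advisor Lite release, so verify the exact identifiers against the product documentation.

// Minimal sketch: marking a loop as a candidate parallel site.
// Header and macro names are assumptions based on the Intel Advisor
// annotation API; check the Advisor Lite documentation for your release.
#include <advisor-annotate.h>
#include <cstddef>

void scale(float* data, std::size_t n, float factor)
{
    ANNOTATE_SITE_BEGIN(scale_site);          // "pretend" parallel region starts here
    for (std::size_t i = 0; i < n; ++i)
    {
        ANNOTATE_ITERATION_TASK(scale_task);  // each iteration is a candidate task
        data[i] *= factor;                    // existing serial code, unchanged
    }
    ANNOTATE_SITE_END();                      // end of the candidate region
}

The annotated program still builds and runs serially with your normal toolchain; the macros merely record where parallelism would go so the tool can model its effect.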

Parallel Advisor Lite embodies proven practices for parallelizing code. It makes those practices convenient to follow through a workflow built into the tool, and it helps you gain the discipline to apply them consistently. In essence, the workflow consists of the following high-level steps:

    • Identify parallel candidates ("hotspots") in your code

    • Model parallelism to see which of the candidate hotspots may actually "pay off"

    • Model correctness to discover data-sharing issues so that you can fix them first (see the sketch just after this list)
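To make the correctness step concrete, here is a hedged sketch of the kind of data-sharing issue the tool surfaces before any real threads exist, together with lock annotations that model one possible fix. Again, the macro names are assumptions based on the Intel Advisor annotation API and should be checked against the Advisor Lite documentation.

// Sketch: a shared accumulator that correctness modeling would flag,
// plus lock annotations that model serializing access to it.
// Macro names are assumptions; verify them in the Advisor Lite docs.
#include <advisor-annotate.h>
#include <cstddef>

double sum_of_squares(const double* v, std::size_t n)
{
    double total = 0.0;
    ANNOTATE_SITE_BEGIN(sum_site);
    for (std::size_t i = 0; i < n; ++i)
    {
        ANNOTATE_ITERATION_TASK(sum_task);
        double sq = v[i] * v[i];        // per-iteration work: no sharing issue

        ANNOTATE_LOCK_ACQUIRE(0);       // model a lock around the shared update
        total += sq;                    // the update the tool would flag
        ANNOTATE_LOCK_RELEASE(0);
    }
    ANNOTATE_SITE_END();
    return total;
}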



You can then incorporate actual parallel code with greater confidence (a sketch of that step follows the list below). If this approach seems intuitive and familiar to you, it is because these are proven practices! The key value propositions are:

    • the discipline of correctness modeling improves overall quality by reducing the number of correctness issues that survive into later stages

    • you can parallelize your code with confidence, rather than tentatively
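Once modeling shows that a site pays off and its sharing issues are understood, turning the annotations into real parallel code is a much smaller step. The sketch below is one plausible way to do it with OpenMP (TBB would serve equally well); it is illustrative rather than anything the tool generates, and it resolves the shared total from the earlier sketch with an OpenMP reduction.

// Illustrative only: the annotated site rewritten with OpenMP.
// Build with -fopenmp (gcc/clang) or /openmp (MSVC).
#include <cstddef>

double sum_of_squares(const double* v, std::size_t n)
{
    double total = 0.0;
    // The sharing issue on 'total' maps onto an OpenMP reduction.
    // A signed loop index keeps older OpenMP implementations happy.
    #pragma omp parallel for reduction(+ : total)
    for (long i = 0; i < static_cast<long>(n); ++i)
    {
        total += v[i] * v[i];
    }
    return total;
}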



Consider the traditional approach: first parallelize the code (using Windows native threads, POSIX threads, or threading libraries such as TBB and OpenMP) and then test it for correctness with developer unit tests or system tests. The drawback is that threading errors such as data races and deadlocks may or may not be exposed by the testing. If you are more disciplined and use tools like Intel Thread Checker or Intel Parallel Inspector, the problem is somewhat alleviated, but it remains difficult to "test drive" parallelization in different candidate locations of your code. Parallel Advisor Lite addresses both problems: you can test drive different candidate locations and discover and resolve correctness issues as well!
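To see why testing alone is unreliable here, consider a small, purely illustrative example (not taken from any Advisor material): two threads increment a shared counter with no synchronization. Whether updates are lost depends entirely on how the threads happen to interleave, so the same test can pass on one run and fail on the next.

// Illustrative data race: the lost updates appear only under unlucky
// thread interleavings, which is exactly why tests can miss them.
#include <iostream>
#include <thread>

int counter = 0;                     // shared and unprotected

void work()
{
    for (int i = 0; i < 100000; ++i)
        ++counter;                   // read-modify-write is not atomic
}

int main()
{
    std::thread a(work);
    std::thread b(work);
    a.join();
    b.join();
    // Expected 200000; a smaller value means updates were lost to the race.
    std::cout << counter << '\n';
}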

Parallel Advisor Lite is thus a parallelism modeler and a unique tool, probably the first of its kind. If you have not yet tried it, I strongly encourage you to take it for a test drive. If you have already tried it, what does it do well, and what does it not do well? What do you like most about the tool? Please let us know!

For more complete information about compiler optimizations, see our Optimization Notice.