# Mathematical Parallelization By Compilers

This is not to say that compilers already parallelize code automatically. I would, however, really like to see that happen, and here is an interesting and reliable way to reason about parallelizing operations. If a compiler can apply this way of thinking, the same rules can also serve as hints for developers writing code today.

The C and C++ languages are built on mathematical expressions, so much so that `1;` is a legal statement in C++. Other languages such as C#, Java, VB and Delphi also use mathematical expressions to perform actions. For example:

```
MyInteger = GetCount() + GetLength()
```

Both GetCount and GetLength are function calls that do some work.

Many mathematical operations are commutative; for example, X = 1 + 2 is the same as X = 2 + 1. This means that:

```
X = CallFunc_A() + CallFunc_B()
```

is the same as:

```
X = CallFunc_B() + CallFunc_A()
```

This is a hint telling us that CallFunc_A and CallFunc_B can run in parallel (unless they share internal resources).
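
To make the hint concrete, here is a minimal C++ sketch, assuming placeholder bodies for CallFunc_A and CallFunc_B (they are not defined in this post), using std::async to evaluate the two operands concurrently:

```cpp
#include <future>
#include <iostream>

// Placeholder implementations; any two independent,
// side-effect-free functions would do.
int CallFunc_A() { return 1; }
int CallFunc_B() { return 2; }

int main() {
    // Since + is commutative and the calls share no state,
    // both operands can be evaluated concurrently.
    auto a = std::async(std::launch::async, CallFunc_A);
    auto b = std::async(std::launch::async, CallFunc_B);

    // The join: we only block when we actually need the values.
    int x = a.get() + b.get();
    std::cout << "X = " << x << '\n';  // prints X = 3
}
```

The .get() calls mark where the results are actually needed; until then the two calls are free to overlap.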

A more complex example would be X = 3 * (1 + 2); the hint now is that one operation (the addition) needs to complete before the other (the multiplication) can continue.
Here is the code equivalent:

```
X = CallFunc_C() * ( CallFunc_A() + CallFunc_B() )
```

Here is another:

```
X = CallFunc_C( CallFunc_A() + CallFunc_B() )
```
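
Here is a minimal C++ sketch of that dependency, again with placeholder function bodies: CallFunc_A and CallFunc_B run in parallel, and the closing parenthesis of the argument list is where we must wait for both before CallFunc_C can start:

```cpp
#include <future>

// Placeholder implementations for the functions named above.
int CallFunc_A() { return 1; }
int CallFunc_B() { return 2; }
int CallFunc_C(int n) { return 3 * n; }

int main() {
    // A and B are independent of each other, so they may overlap.
    auto a = std::async(std::launch::async, CallFunc_A);
    auto b = std::async(std::launch::async, CallFunc_B);

    // The closing parenthesis is the join point: CallFunc_C cannot
    // start until both operands of + have been produced.
    int x = CallFunc_C(a.get() + b.get());
    return x == 9 ? 0 : 1;  // 3 * (1 + 2)
}
```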

These concepts apply to logical operations as well.

It looks like, as a general rule, closing a parenthesis marks a Conjunction Point (a Join). That makes sense, because we do not actually need an operation to be complete until we need its return value.

The question we are left with is how to cancel an operation whose result is no longer needed. For example, in X = A or B, what happens if we execute A and B in parallel and B returns TRUE while A is still executing?
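
One possible mechanism is cooperative cancellation. Here is a minimal C++20 sketch, assuming hypothetical stand-ins EvalA and EvalB for the two operands, where the long-running operand polls a std::stop_token and we request a stop as soon as the other operand has decided the result:

```cpp
#include <atomic>
#include <chrono>
#include <stop_token>
#include <thread>

// Hypothetical operands of "X = A or B": B answers quickly,
// A is a long computation that polls for cancellation.
bool EvalB() { return true; }

bool EvalA(std::stop_token st) {
    for (int i = 0; i < 1000; ++i) {
        if (st.stop_requested()) return false;  // answer no longer needed
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    return false;
}

int main() {
    std::atomic<bool> result{false};

    std::jthread a([&result](std::stop_token st) {
        if (EvalA(st)) result = true;
    });
    std::jthread b([&result] {
        if (EvalB()) result = true;
    });

    b.join();
    if (result) a.request_stop();  // B decided the OR; cancel A
    a.join();
    return result ? 0 : 1;
}
```

Note that the cancellation is cooperative: A only stops at its next poll of the token, so some work is still wasted.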


---

Hi Asaf.

This is not a new idea. With whole-program analysis (for most languages), and with a very simple analysis for pure declarative languages, we can easily determine which operations are independent and therefore decide to run them in parallel. Furthermore, futures [1] can be used where computations are dependent, to create safe, dependent parallelism.
This can be done automatically; the programmer need not be aware of the communication that their program performs [2].
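
For concreteness, here is a minimal sketch of a future in C++ (Produce and Consume are hypothetical names, and std::promise/std::future stand in for the futures of [1]); the consumer blocks only at the point where it actually needs the producer's value:

```cpp
#include <future>
#include <thread>

// Hypothetical producer/consumer pair with a data dependency.
int Produce() { return 21; }
int Consume(int n) { return 2 * n; }

int main() {
    std::promise<int> p;
    std::future<int> f = p.get_future();

    // The producer runs in parallel and fulfils the future.
    std::thread producer([&p] { p.set_value(Produce()); });

    // The consumer blocks only at the point where it actually
    // needs the value, so the dependency stays safe.
    int result = Consume(f.get());

    producer.join();
    return result == 42 ? 0 : 1;
}
```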

The tricky part is knowing which parallelisations make the program faster and which make it slower. This is very important: I have only four cores in my system, so I don't need much parallelism, and I want parallelism that has small overheads and doesn't block for long. But I believe it can be done, and done automatically by the compiler to parallelise my code; this is the topic of my PhD thesis, and more details can be found in [3].

Regarding possibly cancelling tasks whose answer is no longer needed, such as in the case of the boolean operation: I prefer never to parallelise them to begin with. This is speculative execution, and it can easily be wasteful if done too aggressively. Concentrate on introducing parallelism in the cases where you _know_ you'll need the answer, then benchmark the program and determine whether it's fast enough before handling trickier cases like this.

Enjoy.