Mathematical Parallelization By Compilers

I am not claiming that compilers can automatically parallelize code today. I would, however, really like to see that happen, and here is an interesting and reliable way to reason about parallelizing operations. If a compiler can apply this way of thinking, it can also serve as a set of hints for developers writing code today.

The C and C++ languages are built around mathematical expressions. So much so that 1; is a legal statement in C++. Other languages such as C#, Java, VB and Delphi also use mathematical expressions to perform actions. For example:
  MyInteger = GetCount() + GetLength()
Both operands are function calls that do some work.

Many mathematical operations are commutative. For example, X = 1 + 2 is the same as X = 2 + 1. This means that:
  X = CallFunc_A() + CallFunc_B()
is the same as:
  X = CallFunc_B() + CallFunc_A()

This is a hint telling us that CallFunc_A and CallFunc_B can run in parallel (as long as they don't share internal resources or state).

A more complex example would be X = 3 * (1 + 2). The hint now is that one operation (the addition) must complete before the other (the multiplication) can continue.
Here is the code equivalent:
  X = CallFunc_C() * ( CallFunc_A() + CallFunc_B() )

Here is another:
  X = CallFunc_C( CallFunc_A() + CallFunc_B() )

These concepts apply to logical operations as well.

It looks like, as a general rule, every closing parenthesis gives us a Conjunction Point (a Join). That makes sense, because we don't really need an operation to be complete until we need its return value.

The question we have left now is how to cancel an operation once its result is no longer needed. Take X = A or B, for example: what if we execute A and B in parallel, and B returns TRUE while A is still executing?

Your thoughts?