Avoiding the Cost of Branch Misprediction

Introduction

by Rajiv Kapoor

Today's processors derive much of their performance from executing instructions in parallel and speculatively, before their results are actually needed. When a sequential block of code is executed, the processor has full visibility of the next set of instructions it needs to execute, and after evaluating their interdependencies, it executes them. When it encounters a comparison instruction and a branch based on its result, however, the processor does not know which instructions to execute next until it knows the outcome of the comparison. The next set of instructions could be either the ones at the branch target address, if the branch is taken, or the ones following the branch instruction, if it is not. This uncertainty would stall the processor's execution pipeline, so to avoid such stalls the processor makes an educated "guess" at the result of the comparison and speculatively executes the instructions it thinks will be needed next. These predictions are based on the past history of that particular branch instruction, which is maintained in the processor's branch history buffer. This article discusses the problems that arise when a branch is mispredicted and offers approaches for avoiding the performance costs of these events.


The Problem With Branches

Predicting a branch from its history yields a correct guess only if the branch has been seen before and follows a reasonably consistent pattern. A typical case of unpredictable branch behavior is when the comparison result depends on the data. When the processor's prediction is incorrect, all the speculatively executed instructions are discarded as soon as the correct branch direction is determined, and the execution pipeline restarts with instructions from the correct destination. This halt in productive work while the new instructions work their way down the execution pipeline is a processor stall, and it is a major drain on performance.
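To see why data-dependent branches defeat the predictor, consider a loop like the following sketch (the function and array names are invented for illustration). Whether the branch is taken depends purely on the data, so with values randomly distributed around the threshold the history-based predictor has no pattern to learn; with sorted data the same branch becomes highly predictable and the identical code typically runs much faster.

```c
#include <stddef.h>

/* Sums only the elements above a threshold. The branch inside the
 * loop depends purely on the data: for randomly distributed values
 * it mispredicts roughly half the time, while for sorted input it
 * is almost perfectly predicted. */
int sum_above(const int *vals, size_t n, int threshold)
{
    int sum = 0;
    for (size_t i = 0; i < n; i++)
        if (vals[i] > threshold)   /* data-dependent branch */
            sum += vals[i];
    return sum;
}
```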

No Branches Means No Mispredicts

One of the most effective techniques for reducing processor stalls due to mispredicted branches is to eliminate the branches altogether. Removing branches not only improves the runtime performance of the code, it also helps the compiler optimize it. When branch removal is not possible, the branch can be coded so that the processor predicts it correctly under the static branch prediction rules. The resources listed below provide more information on static branch prediction rules.
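As a brief illustration of coding to the static rules (the function and error handling here are invented, not from the original article): on IA-32 processors the static predictor assumes a forward conditional branch is not taken the first time it is seen, so placing the common case on the fall-through path and the rare case behind the forward branch tends to match the prediction.

```c
#include <stdio.h>

/* The rare error case sits behind a forward conditional branch,
 * which static prediction assumes is not taken; the common case
 * falls straight through. */
int process(int value)
{
    if (value < 0) {             /* rare case: forward branch taken only on error */
        fprintf(stderr, "bad value\n");
        return -1;
    }
    return value * 2;            /* common case: fall-through path */
}
```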

Which Branches To Remove

Branches are the decision points in a program. They occur at every if and case statement, in every for and do-while loop, and at every goto statement, among others. Some of these, such as the goto statement, are unconditional, are always predicted correctly, and need no optimization. Others, especially branches whose outcome depends on data values, can be very difficult to predict. Such data-dependent branches are the best ones to target for removal.

Removing every branch is rarely worthwhile. To make the right decision, consider the correlation between the execution time spent in the code block containing the branches and the rate of misprediction of those branches. Use a tool such as the Intel® VTune™ Performance Analyzer to measure two metrics for the code block: the percentage of execution time spent in it, and the ratio of branches mispredicted to branches retired (also known as the branch mispredict ratio). As a rule of thumb, remove or optimize a branch if its mispredict ratio is greater than about 8% and the branch is a hotspot in the execution profile relative to the rest of the code. It is important to consider both metrics together, because a branch location may be an execution hotspot simply because it is executed frequently, even though it is never mispredicted.


Branch Elimination By Example

Let's consider some simple synthetic code snippets that illustrate several techniques of branch removal. The following function contains three code segments, commented as segments 1, 2, and 3, which will be used in the following sections to illustrate different ways of removing branches. To simplify the example, it is assumed that size is a multiple of 4 and that all pointer parameters to the function are 16-byte aligned.

void MyFunc ( int size, int blend, float *src, float *dest, float *src_1, ... )
{
  int j;

  // Code segment 1
  for (j = 0; j < size; j++)
  {
    if ( blend == 255 ) // invariant branch
      dest[j] = src_1[j];
    else if ( blend == 0 )
      dest[j] = src_2[j];
    else
      dest[j] = ( src_1[j] * blend + src_2[j] * (255-blend) ) / 256;
  }

  // Code segment 2
  for (j = 0; j < size; j++) // assume (size % 4) == 0
  {
    if ( src[j] > dest[j] )
    {
      minVal[j] = dest[j];
      maxVal[j] = src[j];
    }
    else
    {
      maxVal[j] = dest[j];
      minVal[j] = src[j];
    }
  }

  // Code segment 3
  for (j = 0; j < size; j++)
    if (src[j] < 0.0)
      src[j] = -1.0 * src[j];
}

 

Code Segment 1

This segment shows a simple case in which a busy loop contains branches that are invariant across loop iterations. It is the kind of loop all C and C++ programmers have written hundreds, if not thousands, of times. Compare it with this version:

     void MyFunc ( int size, int blend, float *src, float *dest, float *src_1, ... )
     {
       int j;

       // Code segment 1
       if ( blend == 255 )      // invariant branch before loop
         for ( j = 0; j < size; j++ )
           dest[j] = src_1[j];
       else if ( blend == 0 )   // invariant branch before loop
         for ( j = 0; j < size; j++ )
           dest[j] = src_2[j];
       else
         for ( j = 0; j < size; j++ )
           dest[j] = ( src_1[j] * blend + src_2[j] * (255-blend) ) / 256;

       .........
     }

 

The same operations are still performed, but none of the loops contains a comparison. This means the processor can execute the loops without the performance loss caused by mispredicted branches. This change alone can significantly increase the performance of these loops.

Code Segment 2

This segment shows code that calculates the element-wise minimum and maximum values of two arrays. Using Single Instruction Multiple Data (SIMD) operations, this kind of branchy code can easily be converted into logical operations. One way to do this is to use the mask-generating compare instructions in the Streaming SIMD Extensions (SSE) instruction set offered on the Pentium® III and later processors. Here is an implementation using the C intrinsics for the SSE instructions, which are supported by the Intel and Microsoft C/C++ compilers.

     void MyFunc ( int size, int blend, float *src, float *dest, float *src_1, ... )
     {
       int j;
       .........
       // Code segment 2
       __m128 src4, dest4, min4, max4, mask4, max4_s, max4_d, min4_s, min4_d;
       for (j = 0; j < size/4; j++, src+=4, dest+=4, minVal+=4, maxVal+=4)
       {
           src4   = _mm_load_ps(src);            // load the 4 src values
           dest4  = _mm_load_ps(dest);           // load the 4 dest values
           mask4  = _mm_cmpgt_ps(src4, dest4);   // Fs if (src > dest) else 0s
           max4_s = _mm_and_ps(mask4, src4);     // all max vals from src set
           min4_d = _mm_and_ps(mask4, dest4);    // all min vals from dest set
           max4_d = _mm_andnot_ps(mask4, dest4); // all max vals from dest set
           min4_s = _mm_andnot_ps(mask4, src4);  // all min vals from src set
           max4   = _mm_or_ps(max4_s, max4_d);   // all max values set
           min4   = _mm_or_ps(min4_s, min4_d);   // all min values set
           _mm_store_ps(minVal, min4);           // store the 4 mins
           _mm_store_ps(maxVal, max4);           // store the 4 maxs
       }
       .........
     }

 

This can be implemented even more efficiently using the SIMD min and max instructions that SSE provides. The following code shows the implementation using the _mm_min_ps() and _mm_max_ps() intrinsics.

     void MyFunc ( int size, int blend, float *src, float *dest, float *src_1, ... )
     {
       int j;
       .........
       // Code segment 2
       __m128 src4, dest4, min4, max4;
       for (j = 0; j < size/4; j++, src+=4, dest+=4, minVal+=4, maxVal+=4)
       {
           src4  = _mm_load_ps(src);         // load the 4 src values
           dest4 = _mm_load_ps(dest);        // load the 4 dest values
           min4  = _mm_min_ps(src4, dest4);  // calculate the 4 mins
           max4  = _mm_max_ps(src4, dest4);  // calculate the 4 maxs
           _mm_store_ps(minVal, min4);       // store the 4 mins
           _mm_store_ps(maxVal, max4);       // store the 4 maxs
       }
       .........
     }

 

Code Segment 3

This piece of code replaces the values stored in the array src with their absolute values. The most obvious way to remove the branch here is to use the fabs() intrinsic:

     {
       .........
       // Code segment 3
       for (j = 0; j < size; j++)
           src[j] = (float) fabs((double)src[j]);
     }

 

Using SSE, the code can be further optimized to process four values at a time, which also reduces the branches due to loop execution, since the loop count is now a quarter of the original. Here is an implementation that exploits the fact that calculating the absolute value of a single-precision floating-point number is just a matter of clearing bit 31 of its representation.

     {
       .........
       // Code segment 3
       __declspec(align(16)) int temp[4] =
             {0x7FFFFFFF, 0x7FFFFFFF, 0x7FFFFFFF, 0x7FFFFFFF};
       __m128 src4;
       __m128 const_sign_bit_0 = _mm_load_ps((float *) temp);
       for (j = 0; j < size/4; j++, src+=4)
       {
           src4 = _mm_load_ps(src);                   // load the 4 src values
           src4 = _mm_and_ps(src4, const_sign_bit_0); // clear bit 31
           _mm_store_ps(src, src4);                   // store the 4 values back
       }
     }
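The same bit-31 trick can also be sketched without SSE, which may make clearer what the SIMD version is doing. This scalar rendering is an illustration, not part of the original example:

```c
#include <stdint.h>
#include <string.h>

/* Clears the sign bit (bit 31) of an IEEE-754 single-precision
 * float, yielding the absolute value with no comparison or branch.
 * The memcpy calls avoid strict-aliasing violations; compilers
 * optimize them away. */
float abs_no_branch(float x)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    bits &= 0x7FFFFFFFu;            /* clear bit 31, the sign bit */
    memcpy(&x, &bits, sizeof x);
    return x;
}
```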

 


Conclusion

The Occasional Cost Of Branch Removal

Branch removal is not always free. Consider the optimized versions of code segment 3 shown in the previous section: they process all src values, whereas the original code processes only the negative ones. If the values are non-negative in the majority of cases, the optimized code does more work than the original. Furthermore, when using SSE or SSE2 instructions, be aware of the time spent in "setup" code as opposed to the intended operation itself. Setup code loads and/or rearranges data to make it amenable to the SIMD operations that follow. Always try to chain together as many SIMD operations as possible to amortize the fixed cost of the setup.
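The extra-work tradeoff shows up even in scalar code. A branch-free select like the following sketch (the helper name is invented for illustration) always evaluates both inputs and builds a mask, whereas the branchy version does less work whenever the branch is cheap and well predicted:

```c
/* Branch-free minimum of two ints: -(a < b) is all ones when a < b
 * and all zeros otherwise, so the XOR/AND sequence selects a or b
 * without a conditional jump. Note that both a and b are always
 * evaluated, unlike the branchy (a < b ? a : b). */
int min_no_branch(int a, int b)
{
    return b ^ ((a ^ b) & -(a < b));
}
```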

Resources

For more information on this topic, see the optimization manual for the Pentium® 4 and Intel® Xeon® processors, as well as the processor manuals, which contain significant helpful information. They can be downloaded at: http://developer.intel.com/products/processor/manuals/index.htm

Also see Richard Gerber's book, The Software Optimization Cookbook, Intel Press, 2002, which inspired the code in the first example.

About the Author

Rajiv Kapoor is an applications engineer working for Intel's Software and Solutions group.

This programming note was based in part on a presentation made by Rajiv Kapoor of Intel in September 2001. It was abstracted into an article by Stylus Group Co.*, a division of Pacific Data Works LLC*.

