Developer Guide and Reference

Inline Expansion of Functions

Inline function expansion is one of the most important optimizations performed during Interprocedural Optimization (IPO), and it does not require that the application meet the whole-program-analysis criteria normally required by IPO. For function calls that the compiler believes are executed frequently, the Intel® compiler often replaces the call instructions with the code of the called function itself.
In the compiler, inline function expansion is performed more often on relatively small user functions than on relatively large ones. This optimization improves application performance by:
  • Removing the need to set up parameters for a function call
  • Eliminating the function call branch
  • Propagating constants
Function inlining can improve execution time by removing the runtime overhead of function calls; however, function inlining can increase code size, code complexity, and compile times. In general, when you instruct the compiler to perform function inlining, the compiler can examine the source code in a much larger context, and the compiler can find more opportunities to apply optimizations.
Specifying the [Q]ip compiler option (single-file IPO) causes the compiler to perform inline function expansion for calls to procedures defined within the current source file; in contrast, specifying the [Q]ipo compiler option (multi-file IPO) causes the compiler to perform inline function expansion for calls to procedures defined in other files.
Using the [Q]ip and [Q]ipo compiler options can, in some cases, significantly increase compile time and code size.
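The single-file versus multi-file distinction can be sketched as follows. This assumes the Linux* compiler driver name icc; on Windows* the corresponding options take the /Q prefix (/Qip, /Qipo).

```shell
# Single-file IPO: inlining is limited to calls whose targets are
# defined in the same source file being compiled.
icc -ip -c file1.c

# Multi-file IPO: the compiler may also inline calls to procedures
# defined in the other files named on the command line.
icc -ipo -o app file1.c file2.c
```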
The Intel compiler performs a certain amount of inlining at the default optimization level. This inlining is similar to what is done when you use the [Q]ip option, but the amount of inlining is generally smaller.

Selecting Routines for Inlining

The compiler attempts to select the routines whose inline expansions provide the greatest benefit to program performance. The selection is done using default heuristics.
The inlining heuristics used by the compiler differ based on whether you enable Profile-Guided Optimization (PGO) with the [Q]prof-use compiler option.
When you use PGO with the [Q]ip or [Q]ipo option, the compiler uses the following guidelines for applying heuristics:
  • The default heuristic focuses on the most frequently executed call sites, based on the profile information gathered for the program.
  • The default heuristic always inlines very small functions that meet the minimum inline criteria.
Using IPO with PGO
Combining IPO and PGO typically produces better results than using IPO alone. PGO produces dynamic profiling information that can usually provide better optimization opportunities than the static profiling information used in IPO.
The compiler uses characteristics of the source code to estimate which function calls are executed most frequently. It applies these estimates to the PGO-based guidelines described above. The estimation of frequency, based on static characteristics of the source, is not always accurate.
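A typical IPO-plus-PGO build follows the three-step feedback cycle sketched below. It assumes the Linux* driver name icc and a representative input file; on Windows* use the /Qprof-gen, /Qprof-use, and /Qipo forms of the options.

```shell
# 1. Instrumented build: the compiler inserts profiling counters.
icc -prof-gen -o app main.c

# 2. Training run: execute the application on representative input
#    to produce dynamic profile data.
./app typical_input.dat

# 3. Feedback build: the dynamic profile guides IPO's inlining
#    decisions toward the most frequently executed call sites.
icc -prof-use -ipo -o app main.c
```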