Intel® Integrated Performance Primitives (Intel® IPP) Functions Optimized for Intel® Advanced Vector Extensions (Intel® AVX)

  • The table below reflects the Intel AVX support provided in the Intel IPP 7.0.2 library release.
  • Intel AVX optimized code is available in both the 32-bit and 64-bit editions of the 7.0 library.
  • Intel AVX support in the 6.1 library is very limited; if you plan to use Intel IPP on an Intel AVX platform, you should upgrade to the 7.0 version of the Intel IPP library.

Intel® AVX (Intel® Advanced Vector Extensions) is a 256-bit instruction set extension to SSE designed to provide even higher performance for floating-point-intensive applications. Intel AVX adds new functionality to the existing Intel SIMD instruction set (based on SSE) and introduces a more compact SIMD encoding format. A large number (200+) of Intel SSEx instructions have been "upgraded" in Intel AVX to take advantage of features such as a distinct destination operand and flexible memory alignment. Approximately 100 of the legacy 128-bit Intel SSEx instructions have been promoted to process 256-bit vector data, and approximately 100 new data processing and arithmetic operations, not present in the legacy Intel SSEx SIMD instruction set, have been added.

The primary benefits of Intel AVX are:

  • Support for wider vector data (up to 256-bit).
  • Efficient instruction encoding scheme that supports three- and four-operand instruction syntax.
  • Flexible programming environment, ranging from branch handling to relaxed memory alignment requirements.
  • New data manipulation and arithmetic compute primitives, including broadcast, permute, fused-multiply-add, etc.



ippGetCpuFeatures() reports information regarding the SIMD features available on your processor. Alternatively, ippGetCpuType() detects the processor type in your system; a return value of ippCpuAVX means your processor supports the Intel AVX instruction set. Both functions are declared in ippcore.h.

Mask the value returned by ippGetCpuFeatures() with ippCPUID_AVX (0x0100) to determine whether your processor supports the Intel AVX SIMD instructions (ippGetCpuFeatures() & ippCPUID_AVX is TRUE). To determine whether your operating system also supports the Intel AVX instructions (i.e., saves and restores the extended SIMD registers), mask the value returned by ippGetCpuFeatures() with ippAVX_ENABLEDBYOS (0x0200). Both conditions (CPU and OS support) must be met before your application can use the Intel AVX SIMD instructions.



The Intel IPP library has been optimized for a variety of SIMD instruction sets. Automatic "dispatching" detects the SIMD instruction set that is available on the running processor and selects the optimal SIMD instructions for that processor. Please review Understanding CPU Dispatching in the Intel® IPP Library for more information regarding dispatching.

Intel AVX optimization in the Intel IPP library consists of "hand-optimized" and "compiler-tuned" functions – code that has been directly optimized for the Intel AVX instruction set. Given the large number of primitives in the Intel IPP library, it is not possible to directly optimize every Intel IPP function for the large set of new Intel AVX instructions within a single product release or update (processor-specific optimizations may also take into account cache size and the number of cores/threads). The functions in the table below are therefore those that either benefit most from the new Intel AVX instructions or are the most widely used by Intel IPP customers.

If specific Intel IPP functions you rely on are not listed in the table below and you would like to see them added to the priority list for future Intel AVX optimization, please create a thread on the Intel IPP forum naming those functions.

Functions directly optimized for Intel AVX are added to the table below as they become available with each new release or update of the library.

The following conventions are used in the table below to allow multiple similar functions to be denoted on a single line:

  • {x} - Braces enclose a required (function name) element.
  • [x] - Square brackets enclose an optional (function name) element.
  • | - A vertical line indicates an exclusive choice within a set of optional or required elements.
  • {x|y|z} - Example of three mutually exclusive choices within a required element in the function name.
  • [x|y|z] - Example of three mutually exclusive choices within an optional element in the function name.

Signal Processing

ippsAbs_{16s|32s|32f|64f}[_I] 
ippsAdd_{32f|32fc|64f|64fc}[_I] 
ippsAddC_{32f|64f}[_I] 
ippsAddProductC_32f 
ippsAddProduct_{32fc|64f|64fc} 
ippsAutoCorr_{32f|64f}
ippsConv_32f 
ippsConvert_{8s|8u|16s|16u|32s|64f}32f 
ippsConvert_{32s|32f}64f 
ippsConvert_32f{8s|8u|16s|16u}_Sfs 
ippsConvert_64f32s_Sfs 
ippsCopy_{16s|32s|32f|64f} 
ippsCrossCorr_{32f|64f} 
ippsDFTFwd_CToC_{32f|32fc|64f|64fc} 
ippsDFTFwd_RTo{CCS|Pack|Perm}_{32f|64f} 
ippsDFTInv_CCSToR_{32f|64f} 
ippsDFTInv_CToC_{32f|32fc|64f|64fc} 
ippsDFTInv_{Pack|Perm}ToR_{32f|64f} 
ippsDFTOutOrd{Fwd|Inv}_CToC_{32fc|64fc} 
ippsDiv[C]_32f[_I] 
ippsDotProd_32f64f 
ippsFFTFwd_CToC_{32f|32fc|64f|64fc}[_I] 
ippsFFTFwd_RTo{CCS|Pack|Perm}_{32f|64f}[_I] 
ippsFFTInv_CCSToR_{32f|64f}[_I] 
ippsFFTInv_CToC_{32f|32fc|64f|64fc}[_I] 
ippsFFTInv_{Pack|Perm}ToR_{32f|64f}[_I] 
ippsFIR64f_32f[_I] 
ippsFIR64fc_32fc[_I] 
ippsFIRLMS_32f 
ippsFIR_{32f|32fc|64f|64fc}[_I] 
ippsIIR32fc_16sc_[I]Sfs 
ippsIIR64fc_32fc[_I] 
ippsIIR_32f[_I] 
ippsLShiftC_16s_I 
ippsMagnitude_16sc_Sfs 
ipps{Min|Max}Indx_{32f|64f} 
ippsMul_32fc[_I] 
ippsMul[C]_{32f|32fc|64f|64fc}[_I] 
ippsMulC_64f64s_ISfs 
ipps{Not|Or}_8u 
ippsPhase_{16s|16sc|32sc}_Sfs 
ippsPowerSpectr_{32f|32fc} 
ippsRShiftC_16u_I 
ippsSet_{8u|16s|32s} 
ippsSqr_{8u|16s|16u|16sc}_[I]Sfs 
ippsSqr_{32f|32fc|64f|64fc}[_I] 
ippsSqrt_32f[_I] 
ippsSub_{32f|32fc|64f|64fc}[_I] 
ippsSubC_{32f|32fc|64f|64fc}[_I] 
ippsSubCRev_{32f|32fc|64f|64fc}[_I] 
ippsSum_{32f|64f} 
ippsThreshold_{32f|GT_32f|LT_32f}[_I] 
ippsThreshold_{GT|LT}Abs_{32f|64f}[_I] 
ippsThreshold_GTVal_32f[_I] 
ippsWinBartlett_{32f|32fc|64f|64fc}[_I] 
ippsWinBlackman_{32f|64f|64fc}[_I] 
ippsWinBlackmanOpt_{32f|64f|64fc}[_I] 
ippsWinBlackmanStd_{32f|64f|64fc}[_I] 
ippsWinKaiser_{32f|64f|64fc}[_I] 
ippsZero_{8u|16s|32f}

 

SPIRAL (GEN) Functions

ippgDFTFwd_CToC_8_64fc 
ippgDFTFwd_CToC_12_64fc 
ippgDFTFwd_CToC_16_{32fc|64fc}
ippgDFTFwd_CToC_20_64fc
ippgDFTFwd_CToC_24_64fc
ippgDFTFwd_CToC_28_64fc 
ippgDFTFwd_CToC_32_{32fc|64fc}
ippgDFTFwd_CToC_36_64fc
ippgDFTFwd_CToC_40_64fc
ippgDFTFwd_CToC_44_64fc 
ippgDFTFwd_CToC_48_{32fc|64fc}
ippgDFTFwd_CToC_52_64fc 
ippgDFTFwd_CToC_56_64fc 
ippgDFTFwd_CToC_60_64fc 
ippgDFTFwd_CToC_64_{32fc|64fc} 
ippgDFTInv_CToC_8_64fc 
ippgDFTInv_CToC_12_64fc 
ippgDFTInv_CToC_16_{32fc|64fc} 
ippgDFTInv_CToC_20_64fc 
ippgDFTInv_CToC_24_64fc 
ippgDFTInv_CToC_28_64fc 
ippgDFTInv_CToC_32_{32fc|64fc} 
ippgDFTInv_CToC_36_64fc 
ippgDFTInv_CToC_40_64fc 
ippgDFTInv_CToC_44_64fc 
ippgDFTInv_CToC_48_{32fc|64fc} 
ippgDFTInv_CToC_52_64fc 
ippgDFTInv_CToC_56_64fc 
ippgDFTInv_CToC_60_64fc 
ippgDFTInv_CToC_64_{32fc|64fc}

 

Audio Coding

ippsDeinterleave_32f

 

Speech Coding

ippsAdaptiveCodebookSearch_RTA_32f
ippsFixedCodebookSearch_RTA_32f
ippsFixedCodebookSearchRandom_RTA_32f
ippsHighPassFilter_RTA_32f
ippsLSPQuant_RTA_32f
ippsLSPToLPC_RTA_32f
ippsPostFilter_RTA_32f_I
ippsQMFDecode_RTA_32f
ippsSynthesisFilter_G729_32f

 

Color Conversion

ippiRGBToHLS_8u_AC4R
ippiRGBToHLS_8u_C3R

 

Realistic Rendering

ipprCastEye_32f
ipprCastShadowSO_32f
ipprDot_32f_P3C1M
ipprHitPoint3DEpsM0_32f_M
ipprHitPoint3DEpsS0_32f_M
ipprMul_32f_C1P3IM

 

Computer Vision

ippiEigenValsVecs_[8u]32f_C1R 
ippiFilterGaussBorder_32f_C1R 
ippiMinEigenVal_[8u]32f_C1R 
ippiNorm_Inf_{8u|8s|16u|32f}_C{1|3C}MR 
ippiNorm_L1_{8u|8s|16u|32f}_C{1|3C}MR 
ippiNorm_L2_{8u|8s|16u|32f}_C{1|3C}MR 
ippiNormRel_L2_32f_C3CMR 
ippiUpdateMotionHistory_[8u|16u]32f_C1IR

 

Image Processing

ippiAddC_32f_C1[I]R 
ippiConvert_32f* 
ippiCopy_16s* 
ippiCopy_8u* 
ippiConvFull_32f_{AC4|C1|C3}R 
ippiConvValid_32f_{AC4|C1|C3}R 
ippiCrossCorrFull_NormLevel_16u32f_{AC4|C1|C3|C4}R 
ippiCrossCorrFull_NormLevel_32f_{AC4|C1|C3|C4}R 
ippiCrossCorrFull_NormLevel_64f_C1R 
ippiCrossCorrFull_NormLevel_8s32f_{AC4|C1|C3|C4}R 
ippiCrossCorrFull_NormLevel_8u32f_{AC4|C1|C3|C4}R 
ippiCrossCorrFull_NormLevel_8u_{AC4|C1|C3|C4}RSfs 
ippiCrossCorrFull_Norm_16u32f_{AC4|C1|C3|C4}R 
ippiCrossCorrFull_Norm_32f_{AC4|C1|C3|C4}R 
ippiCrossCorrFull_Norm_8s32f_{AC4|C1|C3|C4}R 
ippiCrossCorrFull_Norm_8u32f_{AC4|C1|C3|C4}R 
ippiCrossCorrFull_Norm_8u_{AC4|C1|C3|C4}RSfs 
ippiCrossCorrSame_NormLevel_16u32f_{AC4|C1|C3|C4}R 
ippiCrossCorrSame_NormLevel_32f_{AC4|C1|C3|C4}R 
ippiCrossCorrSame_NormLevel_8s32f_{AC4|C1|C3|C4}R 
ippiCrossCorrSame_NormLevel_8u32f_{AC4|C1|C3|C4}R 
ippiCrossCorrSame_NormLevel_8u_{AC4|C1|C3|C4}RSfs 
ippiCrossCorrSame_Norm_16u32f_{AC4|C1|C3|C4}R 
ippiCrossCorrSame_Norm_32f_{AC4|C1|C3|C4}R 
ippiCrossCorrSame_Norm_8s32f_{AC4|C1|C3|C4}R 
ippiCrossCorrSame_Norm_8u32f_{AC4|C1|C3|C4}R 
ippiCrossCorrSame_Norm_8u_{AC4|C1|C3|C4}RSfs 
ippiCrossCorrValid_{8u32f|8s32f|16u32f|32f}_C1R 
ippiCrossCorrValid_NormLevel_16u32f_{AC4|C1|C3|C4}R 
ippiCrossCorrValid_NormLevel_32f_{AC4|C1|C3|C4}R 
ippiCrossCorrValid_NormLevel_64f_C1R 
ippiCrossCorrValid_NormLevel_8s32f_{AC4|C1|C3|C4}R 
ippiCrossCorrValid_NormLevel_8u32f_{AC4|C1|C3|C4}R 
ippiCrossCorrValid_NormLevel_8u_{AC4|C1|C3|C4}RSfs 
ippiCrossCorrValid_Norm_16u32f_{AC4|C1|C3|C4}R 
ippiCrossCorrValid_Norm_32f_{AC4|C1|C3|C4}R 
ippiCrossCorrValid_Norm_8s32f_{AC4|C1|C3|C4}R 
ippiCrossCorrValid_Norm_8u32f_{AC4|C1|C3|C4}R 
ippiCrossCorrValid_Norm_8u_{AC4|C1|C3|C4}RSfs 
ippiDCT8x8FwdLS_8u16s_C1R 
ippiDCT8x8Fwd_16s_C1[I|R] 
ippiDCT8x8Fwd_32f_C1[I] 
ippiDCT8x8Fwd_8u16s_C1R 
ippiDCT8x8InvLSClip_16s8u_C1R 
ippiDCT8x8Inv_16s8u_C1R 
ippiDCT8x8Inv_16s_C1[I|R] 
ippiDCT8x8Inv_2x2_16s_C1[I] 
ippiDCT8x8Inv_32f_C1[I] 
ippiDCT8x8Inv_4x4_16s_C1[I] 
ippiDCT8x8Inv_A10_16s_C1[I] 
ippiDCT8x8To2x2Inv_16s_C1[I] 
ippiDCT8x8To4x4Inv_16s_C1[I] 
ippiDFTFwd_CToC_32fc_C1[I]R 
ippiDFTFwd_RToPack_32f_{AC4|C1|C3|C4}[I]R 
ippiDFTFwd_RToPack_8u32s_{AC4|C1|C3|C4}RSfs 
ippiDFTInv_CToC_32fc_C1[I]R 
ippiDFTInv_PackToR_32f_{AC4|C1|C3|C4}[I]R 
ippiDFTInv_PackToR_32s8u_{AC4|C1|C3|C4}RSfs 
ippiDilate3x3_32f_C1[I]R 
ippiDilate3x3_64f_C1R 
ippiDivC_32f_C1[I]R 
ippiDiv_32f_{C1|C3}[I]R 
ippiDotProd_32f64f_{C1|C3}R 
ippiErode3x3_64f_C1R 
ippiFFTFwd_CToC_32fc_C1[I]R 
ippiFFTFwd_RToPack_32f_{AC4|C1|C3|C4}[I]R 
ippiFFTFwd_RToPack_8u32s_{AC4|C1|C3|C4}RSfs 
ippiFFTInv_CToC_32fc_C1[I]R 
ippiFFTInv_PackToR_32f_{AC4|C1|C3|C4}[I]R 
ippiFFTInv_PackToR_32s8u_{AC4|C1|C3|C4}RSfs 
ippiFilter_32f_{C1|C3|C4}R 
ippiFilter_32f_AC4R 
ippiFilter_64f_{C1|C3}R 
ippiFilter32f_{8s|8u|16s|16u|32s}_C{1|3|4}R 
ippiFilter32f_{8u|16s|16u}_AC4R 
ippiFilter32f_{8s|8u}16s_C{1|3|4}R 
ippiFilterBox_8u_{C1|C3}R 
ippiFilterBox_32f_{C1|C4|AC4}R 
ippiFilterColumn32f_{8u|16s|16u}_{C1|C3|C4|AC4}R 
ippiFilterColumn_32f_{C1|C3|C4|AC4}R 
ippiFilterGauss_32f_{C1|C3}R 
ippiFilterHipass_32f_{C1|C3|C4|AC4}R 
ippiFilterLaplace_32f_{C1|C3|C4|AC4}R 
ippiFilterLowpass_32f_{C1|C3|AC4}R 
ippiFilterMax_32f_{C1|C3|C4|AC4}R 
ippiFilterMedian_32f_C1R 
ippiFilterMin_32f_{C1|C3|C4|AC4}R 
ippiFilterRow_32f_{C1|C3|C4|AC4}R 
ippiFilterRow32f_{8u|16s|16u}_{C1|C3|C4|AC4}R 
ippiFilterSobelHoriz_32f_{C1|C3}R 
ippiFilterSobelVert_32f_{C1|C3}R 
ippiMean_32f_{C1|C3}R 
ippiMulC_32f_C1[I]R 
ippiMul_32f_{C1|C3|C4}[I]R 
ippiResizeSqrPixel_{32f|64f}_{C1|C3|C4|AC4}R 
ippiResizeSqrPixel_{32f|64f}_{P3|P4}R 
ippiSqrDistanceFull_Norm_16u32f_{AC4|C1|C3|C4}R 
ippiSqrDistanceFull_Norm_32f_{AC4|C1|C3|C4}R 
ippiSqrDistanceFull_Norm_8s32f_{AC4|C1|C3|C4}R 
ippiSqrDistanceFull_Norm_8u32f_{AC4|C1|C3|C4}R 
ippiSqrDistanceFull_Norm_8u_{AC4|C1|C3|C4}RSfs 
ippiSqrDistanceSame_Norm_16u32f_{AC4|C1|C3|C4}R 
ippiSqrDistanceSame_Norm_32f_{AC4|C1|C3|C4}R 
ippiSqrDistanceSame_Norm_8s32f_{AC4|C1|C3|C4}R 
ippiSqrDistanceSame_Norm_8u32f_{AC4|C1|C3|C4}R 
ippiSqrDistanceSame_Norm_8u_{AC4|C1|C3|C4}RSfs 
ippiSqrDistanceValid_Norm_16u32f_{AC4|C1|C3|C4}R 
ippiSqrDistanceValid_Norm_32f_{AC4|C1|C3|C4}R 
ippiSqrDistanceValid_Norm_8s32f_{AC4|C1|C3|C4}R 
ippiSqrDistanceValid_Norm_8u32f_{AC4|C1|C3|C4}R 
ippiSqrDistanceValid_Norm_8u_{AC4|C1|C3|C4}RSfs 
ippiSqrt_32f_C1R 
ippiSqrt_32f_C3IR 
ippiSubC_32f_C1[I]R 
ippiSub_32f_{C1|C3|C4}[I]R 
ippiSum_32f_C{1|3}R 
ippiTranspose_32f_C1R

 

Image Compression

ippiPCTFwd_JPEGXR_32f_C1IR 
ippiPCTFwd16x16_JPEGXR_32f_C1IR 
ippiPCTFwd8x16_JPEGXR_32f_C1IR 
ippiPCTFwd8x8_JPEGXR_32f_C1IR 
ippiPCTInv_JPEGXR_32f_C1IR_128 
ippiPCTInv16x16_JPEGXR_32f_C1IR 
ippiPCTInv8x16_JPEGXR_32f_C1IR 
ippiPCTInv8x8_JPEGXR_32f_C1IR

 

Functions that have not been directly optimized for Intel AVX (i.e., functions that do not appear in the table above) have been compiled using the Intel Compiler "xG" switch (enable Intel AVX optimization). Additional performance is gained by adherence to an Intel AVX ABI (application binary interface) recommendation: the special vzeroupper instruction is inserted after any function containing Intel AVX code, which eliminates AVX-to-SSE transition penalties.

For functions that are not directly optimized for Intel AVX, the g9/e9 libraries reuse prior compatible SSE optimizations, such as those tuned for the p8/y8 libraries and earlier SIMD instruction sets (e.g., SSE4.x, AES-NI, and SSE2/SSE3). Thus, functions not listed above still include the highest level of directly optimized code applicable to the AES-NI, SSE4.x, SSSE3, SSE3, and SSE2 SIMD instruction sets.

For more information about the g9/e9 optimization layer and Intel AVX in the Intel IPP library, please refer to the Intel® Integrated Performance Primitives for Windows* OS on Intel® 64 Architecture User's Guide.

Review How to Compile for Intel® AVX for more information, and visit the Intel Parallel Studio web site to learn more about the tools available to develop, debug, and tune your multi-threaded applications.

Please refer to the Optimization Notice page for more information regarding performance and optimization in Intel software products.