Intrinsics for the Short Vector Random Number Generator Library


The Short Vector Random Number Generator (SVRNG) library provides intrinsics for the IA-32 and Intel® 64 architectures running on supported operating systems. The SVRNG library partially covers both the random number generation functionality of standard C++ and that of the Intel® Math Kernel Library (Intel® MKL). The SVRNG library allows users to produce random numbers using a combination of engines and distributions. "Engines" are basic generators that produce uniformly distributed 32-bit or 64-bit unsigned integers. "Distributions" transform the sequences of numbers generated by an engine into sequences of numbers with specific random variable distributions, such as uniform, normal, binomial, and others. The distributions support single- or double-precision floating point and 32-bit signed integer outputs.

Both scalar and vector implementations are available for SVRNG generation functions. Scalar versions return native C++ data types: float, double, and both 32- and 64-bit integers. Vector versions produce packed results using SIMD-vector registers via corresponding data types as outlined in the "Data types and calling conventions" section below. Scalar versions called in loops can be vectorized by the compiler.

Unlike simple random number generators such as rand(), SVRNG engines and distributions require initialization routines that allocate memory and pre-compute constants needed for fast vector generation. Finalization routines are provided to deallocate memory. Some engines support "skip-ahead" and "leap-frog" techniques for use in parallel computing environments. The "Parallel Computation Support" section discusses how these techniques are used to obtain a random number sequence in parallel that is identical to the sequence generated in the sequential case. Error handling in SVRNG is done via status set and get functions. Additionally, NULL pointers are returned on error where possible.

SVRNG SIMD-vector functions and corresponding vectorized scalar calls are highly optimized for the following instruction sets:

  • Intel® Streaming SIMD Extensions 2 (Intel® SSE2) (default)
  • Intel® Advanced Vector Extensions 2 (Intel® AVX2)
  • Intel® Initial Many Core Instructions (Intel® IMCI)
  • Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Instructions (on Intel® Many Integrated Core architectures and elsewhere)

Further Reference

The following documents are referenced in this section to provide further detail:

  • Developer Reference for Intel® Math Kernel Library 11.3 - C:
  • Notes for Intel® MKL Vector Statistics:
  • __vectorcall and __regcall demystified:

For more complete information about compiler optimizations, see our Optimization Notice.