Intel® Advisor

Strange L1 and L2 bandwidth with Advisor 2017 Update 1 (roofline)


I have installed Advisor 2017 Update 1 and I get very strange numbers for L1 and L2 bandwidth. L2 shows 1.5e+5 GB/s, and L1 is so huge I can't see it on the plot. I don't think these strange numbers were there in the beta release of the roofline model. The behavior occurs on both my laptop and my desktop.

Could you please confirm this is a bug? Is there any workaround?

Best regards,


search-dir symbols on linux

For command line

-search-dir  sym:

-search-dir bin:

On Linux, since the symbolic info is in the object files, sym and bin searches cover the same path, correct?  I would guess sym is only useful on Windows, where the PDB symbolic info files are separate from the binaries, correct?

Or do I misunderstand: is 'bin' for the executable and 'sym' for the object files?  Or does 'bin' mean the executable, shared objects, and .o object files?

I guess the question is: on Linux, what does bin include and what does sym include?
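For context, here is roughly how I'm invoking it today, following the search-dir syntax from the docs (the paths and binary name are placeholders):

```shell
# Passing both modes, pointed at the same build tree, since on Linux
# the DWARF debug info lives inside the binaries themselves:
advixe-cl -collect survey -project-dir ./adviproj \
    -search-dir bin:r=./build \
    -search-dir sym:r=./build \
    -search-dir src:r=./src \
    -- ./build/myapp
```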

Running Intel® Parallel Studio XE Analysis Tools on Clusters with Slurm* / srun

Since HPC applications target high performance, users are interested in analyzing their runtime behavior. To get a representative picture of that performance, it can be important to gather analysis data at the same scale as regular production runs. Doing so, however, would imply that shared memory-focused analysis types run on each individual node of the job in parallel. This might not be in the user's best interest, especially since the behavior of a well-balanced MPI application should be very similar across all nodes.
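One way to act on that observation (a sketch, not official guidance; the wrapper name and the collector command line are illustrative) is a thin wrapper that puts only rank 0 under the analysis tool and lets every other rank run the application directly:

```shell
#!/bin/sh
# analyze-rank0.sh (hypothetical wrapper). Launch it with, e.g.:
#   srun -N 4 -n 64 ./analyze-rank0.sh ./myapp
# pick_command prints the command line a given rank should run:
# rank 0 goes under the collector, everyone else runs the app as-is.
pick_command() {
    rank="$1"; app="$2"
    if [ "$rank" -eq 0 ]; then
        echo "amplxe-cl -collect hotspots -r ./vtune_result -- $app"
    else
        echo "$app"
    fi
}
# Slurm exports this process's rank in SLURM_PROCID. The unquoted
# expansion relies on simple, space-free paths; fine for a sketch.
if [ $# -gt 0 ]; then
    exec $(pick_command "${SLURM_PROCID:-0}" "$*")
fi
```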

  • Linux*
  • Server
  • C/C++
  • Fortran
  • Advanced
  • Intermediate
  • Intel® VTune™ Amplifier
  • Intel® Advisor
  • Intel® Inspector
  • Message Passing Interface (MPI)
  • IMPI
  • Slurm
  • srun
  • cluster
  • analysis
  • Big data
  • Cloud computing
  • Cluster computing
  • Data centers
  • Debugging
  • Development tools
  • Enterprise
  • Optimization
  • Parallel computing
  • Platform analysis
  • Threading
  • Vectorization

Command line MPI workflow

    Using the command line collection, I'd like to collect all the steps (survey, trip counts, memory, etc.) from the command line.  It seems easy enough: run a series of collections, each gathering a different experiment.  The question is, do I re-use the same -project-dir for every collection, or do I create a different project dir for each collection?

    If I use a separate project dir for each collection, do I use -import-dir to gather ALL the collected data into one combined project dir?

    I'm trying to mimic the GUI workflow from the command line.
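If it helps frame the question, here is the sequence I have in mind when re-using one project dir (the binary name is a placeholder):

```shell
# Candidate workflow: run each collection into the same project dir,
# so the trip-count data lands on top of the existing survey result.
advixe-cl -collect survey     -project-dir ./adviproj -- ./myapp
advixe-cl -collect tripcounts -project-dir ./adviproj -- ./myapp
```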



    What does -import-dir do?



    In the documentation for the MPI workflow, the -import-dir step is shown similar to this:

    advixe-cl --collect survey -trace-mpi --project-dir ./adviproj --search-dir all:r=/lustre/ttscratch1/green/collectorbug -- ./cpi


    advixe-cl -project-dir ./new-adviproj -import-dir ./adviproj -mpi-rank 3 -search-dir all:r=/lustre/ttscratch1/green/collectorbug


    command line -no-auto-finalize


    A couple of questions on finalization.

    We're working on the Knights Landing processor.  As expected, finalization on KNL is slower than on Xeon, taking somewhere between 5 and 15 minutes for a moderately large code.  We started using -no-auto-finalize for the KNL collection and finalizing in the GUI (open an empty project, set up the project properties with the binary, source search, and binary search directories, then use Open Results; loading the results finalizes them in the GUI and all is good).
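In other words, the split we use looks roughly like this (paths are placeholders); as I understand it, requesting any report also triggers finalization, which would be a command-line alternative to the GUI step:

```shell
# On the KNL node: collect only; skip the slow finalization.
advixe-cl -collect survey -no-auto-finalize -project-dir ./adviproj -- ./myapp

# Later, on a Xeon host: requesting a report finalizes the raw data
# first, resolving symbols and sources via the search dirs.
advixe-cl -report survey -project-dir ./adviproj \
    -search-dir bin:r=./build -search-dir src:r=./src
```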

    A couple of questions:

Put Your Data and Code in Order: Data and Layout - Part 2

Apply the concepts of parallelism and distributed memory computing to your code to improve software performance. This paper expands on concepts discussed in Part 1 to consider parallelism: both vectorization (single instruction, multiple data, or SIMD) and shared memory parallelism (threading), as well as distributed memory computing.
  • Students
  • Modern code
  • Server
  • Windows*
  • C/C++
  • Fortran
  • Intermediate
  • Intel® Advisor
  • Intel® Cilk™ Plus
  • Intel® Threading Building Blocks
  • Intel® Advanced Vector Extensions
  • OpenMP*
  • Intel® Many Integrated Core Architecture
  • Optimization
  • Parallel computing
  • Threading
  • Vectorization

Feature Request: collector should recognize Cray MPI with ALPS_APP_PE env var


Cray MPI on their XC systems sets the env var ALPS_APP_PE to the rank, unique for each rank, running from 0 to N-1 for N ranks.  They do not use the same env vars as MPICH, Intel MPI, or OpenMPI to pass rank information down to applications.

advixe-cl run under MPI needs to open a results dir for each rank.  I believe it looks at the MPICH, Intel MPI, and OpenMPI env vars to find the rank number to use for the results directories.  I am pretty sure it is not looking for Cray's env var ALPS_APP_PE.  What I'm seeing is that if I launch a Cray MPI job thusly:

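In the meantime, a possible workaround on my end (a sketch, not a supported feature; the wrapper name is made up) is to re-export the Cray rank under an env var the collector is believed to check, such as PMI_RANK:

```shell
#!/bin/sh
# cray-rank-shim.sh (hypothetical wrapper). Launch with, e.g.:
#   aprun -n 64 ./cray-rank-shim.sh ./cpi
# map_rank prints the rank to use for the results directory: prefer
# PMI_RANK if a launcher already set it, otherwise fall back to
# Cray's ALPS_APP_PE, defaulting to 0.
map_rank() {
    if [ -n "${PMI_RANK:-}" ]; then
        echo "$PMI_RANK"
    else
        echo "${ALPS_APP_PE:-0}"
    fi
}
if [ $# -gt 0 ]; then
    # Re-export the rank under a name the collector already checks,
    # then run the collection as usual.
    PMI_RANK="$(map_rank)"
    export PMI_RANK
    exec advixe-cl -collect survey -trace-mpi -project-dir ./adviproj -- "$@"
fi
```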