When would the L2_Data_Read_Miss_Cache_Fill happen?

Hi, guys

    I am trying to run an MPI program on the MIC using 240 threads. I used VTune to analyze the program and found that the ratio L2_DATA_READ_MISS_CACHE_FILL / L2_DATA_READ_MISS_MEM_FILL is very high (about 88:1).

      I suspect that the large number of L2_DATA_READ_MISS_CACHE_FILL events is the reason for my program's poor performance. So I want to know: in what kind of situation does one core need to get data from another core's L2 cache instead of from memory? Since remote cache accesses have about as high a latency as memory accesses, they should be avoided if possible.

    Thank you

     Hu

If you have excessive L2_DATA_READ_MISS_CACHE_FILL without significant values of L2_DATA_WRITE_MISS_CACHE_FILL, it would seem that you are sharing more data among cores than would be expected under MPI.
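
For illustration only, here is a minimal sketch (not your code, and assuming a hybrid setup where OpenMP threads are pinned one per core, e.g. with KMP_AFFINITY=scatter) of the kind of read-sharing pattern that tends to show up as L2_DATA_READ_MISS_CACHE_FILL: one thread writes an array small enough to stay in its core's L2, then the other threads read it, so their read misses are filled cache-to-cache from the producer core's L2 rather than from memory.

    #include <stdio.h>
    #include <stdlib.h>

    #define N (64 * 1024)   /* ~256 KB of ints: small enough to sit in one core's 512 KB L2 */

    int main(void)
    {
        int *data = malloc(N * sizeof(int));
        long sum = 0;

        #pragma omp parallel
        {
            /* One thread writes the whole array, pulling the lines into
               its own core's L2 in modified state. */
            #pragma omp single
            {
                for (int i = 0; i < N; i++)
                    data[i] = i;
            }
            /* Implicit barrier; now every other thread reads the same array.
               Their L2 read misses are serviced from the writer core's L2
               (a cache fill) rather than from GDDR memory (a memory fill). */
            #pragma omp for reduction(+:sum)
            for (int i = 0; i < N; i++)
                sum += data[i];
        }

        printf("sum = %ld\n", sum);
        free(data);
        return 0;
    }

In a pure-MPI run something similar can arise from buffers that several ranks on the card touch, but the more common cause is threads within a process reading data that a core other than their own produced.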

An obvious question is how you are getting to 240 threads. Are you running 240 ranks of 1 thread each, observing the symptoms of this on the VTune side, and asking us to reverse engineer from those symptoms? Are you saying that the symptom appears only as you approach 240 threads?

It's usually necessary to find a reasonable split of MPI ranks and threads per core before getting interested in symptoms. 6 ranks of 30 threads each have been effective in some of my application testing, but a suitably large number of threads typically requires setting ulimit -s (the stack size limit) in each rank, which you didn't mention.
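
For what it's worth, a hybrid MPI + OpenMP skeleton of that kind of split might look like the sketch below (names and numbers are illustrative, not taken from your program): each rank keeps its own slice of the work, and only the threads inside a rank share data. The corresponding launch would be something along the lines of 6 ranks with OMP_NUM_THREADS=30 and a raised stack limit in each rank; the exact mpirun options depend on your MPI implementation.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        double local_sum = 0.0, global_sum = 0.0;

        /* Threads of this rank work only on the rank's own portion of the
           iteration space, so in principle their read misses are filled from
           memory or from cores running this rank, not from cores running
           other ranks. */
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = rank; i < 1000000; i += nranks)
            local_sum += (double)i;

        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("global sum = %.0f with %d ranks\n", global_sum, nranks);

        MPI_Finalize();
        return 0;
    }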

I haven't heard of many cases where performance could scale beyond one MPI rank per core, even in fairly simple benchmarks, due in part to obvious limitations of memory availability and cache size.

Are you observing the requirement to leave the last core open for mpss and VTune?  If you have 61 cores, you would use at most 60 for your application.
