I am trying to measure L1 cache misses for my program, so I turned on the L1 data cache miss rate for sampling. When I checked, I noticed that it had turned on the L1D_REPL event. The L1 data cache miss rate is defined as
L1 data cache miss rate = L1D_REPL / INST_RETIRED.ANY
L1D_REPL : This event counts the number of lines brought into the L1 data cache
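(For reference, this kind of ratio can be reproduced outside the tool with Linux perf_event_open. The sketch below uses the kernel's generic L1D read-miss event together with the generic instructions event; which raw event that generic miss maps to is model specific, and as far as I can tell it is a replacement-style event on some Intel generations and an I-state load event on others, which is exactly the ambiguity I am asking about.)

    /*
     * Minimal sketch: compute a misses-per-instruction ratio with Linux
     * perf_event_open, using the kernel's generic L1D read-miss event.
     * Which model-specific raw event this maps to varies by CPU.
     */
    #include <linux/perf_event.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int open_counter(uint32_t type, uint64_t config, int group_fd)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = type;
        attr.config = config;
        attr.disabled = (group_fd == -1); /* only the group leader starts disabled */
        attr.exclude_kernel = 1;          /* count user space only */
        return (int)syscall(__NR_perf_event_open, &attr, 0, -1, group_fd, 0);
    }

    static uint64_t read_counter(int fd)
    {
        uint64_t v = 0;
        if (read(fd, &v, sizeof(v)) != (ssize_t)sizeof(v))
            perror("read");
        return v;
    }

    int main(void)
    {
        /* Group leader: instructions retired (INST_RETIRED.ANY on Intel). */
        int insn = open_counter(PERF_TYPE_HARDWARE, PERF_COUNT_HW_INSTRUCTIONS, -1);
        /* Generic L1D load-miss event, counted in the same group. */
        int miss = open_counter(PERF_TYPE_HW_CACHE,
                                PERF_COUNT_HW_CACHE_L1D |
                                (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                                (PERF_COUNT_HW_CACHE_RESULT_MISS << 16),
                                insn);
        if (insn < 0 || miss < 0) { perror("perf_event_open"); return 1; }

        ioctl(insn, PERF_EVENT_IOC_RESET, PERF_IOC_FLAG_GROUP);
        ioctl(insn, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);

        /* Stand-in workload: stride through a buffer one cache line at a time. */
        static volatile unsigned char buf[1 << 22];
        for (unsigned i = 0; i < sizeof(buf); i += 64)
            buf[i]++;

        ioctl(insn, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

        uint64_t n_insn = read_counter(insn), n_miss = read_counter(miss);
        printf("L1D miss rate = %llu / %llu = %g\n",
               (unsigned long long)n_miss, (unsigned long long)n_insn,
               n_insn ? (double)n_miss / (double)n_insn : 0.0);
        return 0;
    }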
I was expecting that, for L1 data cache misses, it would instead be
L1 data cache miss rate = L1D_CACHE_LD.I_STATE / INST_RETIRED.ANY
L1D_CACHE_LD.I_STATE : Counts how many times read requests miss the cache (the line being loaded is in the I, i.e. invalid, state)
Then I tried to measure both L1D_CACHE_LD.I_STATE and L1D_REPL (roughly along the lines of the sketch below). Though the counts are not exactly the same, they are not far apart either.
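Here is roughly how the side-by-side measurement can be set up with raw perf events. The raw encodings follow the (umask << 8) | event_select convention; the two config values below are my reading of the Core 2 event tables and are assumptions that need checking against the Intel SDM event list for the exact CPU model:

    /*
     * Sketch: count L1D_REPL and L1D_CACHE_LD.I_STATE side by side as raw
     * events, together with INST_RETIRED.ANY, via Linux perf_event_open.
     */
    #include <linux/perf_event.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* ASSUMPTION: Core 2-era encodings; verify in the Intel SDM for your model. */
    #define RAW_L1D_REPL       0x0f45 /* event 0x45, umask 0x0F */
    #define RAW_L1D_CACHE_LD_I 0x0140 /* event 0x40, umask 0x01 (I state) */

    static int open_counter(uint32_t type, uint64_t config, int group_fd)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = type;
        attr.config = config;
        attr.disabled = (group_fd == -1); /* only the group leader starts disabled */
        attr.exclude_kernel = 1;
        return (int)syscall(__NR_perf_event_open, &attr, 0, -1, group_fd, 0);
    }

    static uint64_t read_counter(int fd)
    {
        uint64_t v = 0;
        if (read(fd, &v, sizeof(v)) != (ssize_t)sizeof(v))
            perror("read");
        return v;
    }

    int main(void)
    {
        int insn   = open_counter(PERF_TYPE_HARDWARE, PERF_COUNT_HW_INSTRUCTIONS, -1);
        int repl   = open_counter(PERF_TYPE_RAW, RAW_L1D_REPL, insn);
        int istate = open_counter(PERF_TYPE_RAW, RAW_L1D_CACHE_LD_I, insn);
        if (insn < 0 || repl < 0 || istate < 0) { perror("perf_event_open"); return 1; }

        ioctl(insn, PERF_EVENT_IOC_RESET, PERF_IOC_FLAG_GROUP);
        ioctl(insn, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);

        /* ... the program under test would run here ... */
        static volatile unsigned char buf[1 << 22];
        for (unsigned i = 0; i < sizeof(buf); i += 64)
            buf[i]++;

        ioctl(insn, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP);

        double n = (double)read_counter(insn);
        printf("L1D_REPL             / INST_RETIRED.ANY = %g\n",
               n ? read_counter(repl) / n : 0.0);
        printf("L1D_CACHE_LD.I_STATE / INST_RETIRED.ANY = %g\n",
               n ? read_counter(istate) / n : 0.0);
        return 0;
    }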
So I am trying to understand why the L1 data cache miss rate is computed from L1D_REPL rather than from L1D_CACHE_LD.I_STATE.
The counterpart for the L2 cache seems to be defined the same way:
L2 cache miss rate = L2_LINES_IN.SELF.ANY / INST_RETIRED.ANY
L2_LINES_IN.SELF.ANY : This event counts the number of cache lines allocated in the L2 cache.
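(For completeness, the same raw-event approach as in the sketch above should cover this event too; my reading of the Core 2 tables is that L2_LINES_IN with the SELF.ANY qualifiers is event 0x24 with umask 0x70, but that is again an assumption to verify against the Intel SDM.)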
So both the L1 and the L2 cache miss rates are based on cache lines allocated. Why aren't they based on cache-miss events such as L1D_CACHE_LD.I_STATE? Also, for the L2 cache, I don't see an event similar to L1D_CACHE_LD.I_STATE.
Any insight will help.