I'm working on optimizing some algorithms, and during this work I found some odd behavior.
1) I'm working in kernel mode using a Windows real-time extension
2) I disabled interrupts and there are no context switches
3) I write back and invalidate the cache each time before I run the algorithm
4) I am using an Intel Core architecture
5) The algorithm mainly reads, modifies, and writes back memory in a loop
6) The memory area I use is not being paged
Now look at the image below. What I don't understand is the behavior at the beginning: why are there these peaks in execution time that settle down after a few executions? Any ideas?
Thanks in advance!