Our software is behaving differently on our preliminary Core i7-4770K (Haswell) system than on the Ivy Bridge systems that have been our platform to date, and we'd be grateful if anyone could help identify the relevant differences.
For decoding, this application of ours doesn't use a single contiguous bitstream buffer with encoded frames appended to it, as in the samples. Rather, to better manage encoded frame lifetime for our use case, we maintain a queue of separate buffers, one per frame. Before each call to DecodeFrameAsync(), we point our mfxBitstream's Data at the buffer containing the frame to decode, reset DataOffset to 0, set DataLength to the frame size, and set MFX_BITSTREAM_COMPLETE_FRAME in DataFlag.
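To make the per-frame setup concrete, here's a minimal sketch of what we do before each decode call. Note this uses a simplified stand-in struct with only the fields discussed (the real mfxBitstream and MFX_BITSTREAM_COMPLETE_FRAME live in the SDK headers, which aren't reproduced here), so treat it as illustrative rather than compilable against the SDK:

```cpp
#include <cstdint>

// Simplified stand-in for the SDK's mfxBitstream -- only the fields
// relevant to this discussion.
struct BitstreamView {
    uint8_t* Data       = nullptr;
    uint32_t DataOffset = 0;
    uint32_t DataLength = 0;
    uint32_t MaxLength  = 0;
    uint16_t DataFlag   = 0;
};

// Stand-in for MFX_BITSTREAM_COMPLETE_FRAME.
const uint16_t kCompleteFrameFlag = 0x0001;

// Point the bitstream at one queued frame buffer before DecodeFrameAsync().
void PointAtFrame(BitstreamView& bs, uint8_t* frame, uint32_t frameSize) {
    bs.Data       = frame;       // this frame's own buffer from our queue
    bs.DataOffset = 0;           // decode from the start of the buffer
    bs.DataLength = frameSize;   // exactly one complete encoded frame
    bs.MaxLength  = frameSize;
    bs.DataFlag   = kCompleteFrameFlag;
}
```

After this setup we hand the structure to DecodeFrameAsync() as usual; once the frame is consumed, the buffer can be retired from the queue independently of the others.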
This has been working great for us, but on the Haswell system we see ghostly distortions as if I-frames were being missed, semi-random extreme pixelation, jerkiness, and possibly some frames from slightly backwards in time (though that may be an illusion). The video output is overall barely discernible.
If I switch to a single contiguous static bitstream buffer and let the calls to DecodeFrameAsync() advance DataOffset automatically, every frame decodes flawlessly, even though everything else is exactly the same.
I noticed that on calls to DecodeFrameAsync(), even when MFX_ERR_MORE_SURFACE is returned, DataOffset is often advanced to the end of the frame anyway. So when this occurs, I tried resetting DataOffset to its original position and calling DecodeFrameAsync() again on the same buffer -- and decoding output improved dramatically. There are still artifacts, but they are less frequent and less severe.
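For clarity, here's a sketch of the rewind-and-retry workaround described above. It again uses a simplified stand-in bitstream struct and stand-in status codes (the real ones are mfxStatus values from the SDK), and the decode call is abstracted as a callback so the retry logic can be shown on its own:

```cpp
#include <cstdint>
#include <functional>

// Stand-ins for the SDK's mfxStatus codes relevant here.
enum Status { ErrNone = 0, ErrMoreSurface = 1 };

// Simplified stand-in for mfxBitstream.
struct BitstreamView {
    uint8_t* Data;
    uint32_t DataOffset;
    uint32_t DataLength;
};

// Workaround: if the decoder reports "more surface" but has already
// advanced DataOffset past the frame, rewind to the saved position and
// submit the same frame again.
Status DecodeWithRewind(BitstreamView& bs,
                        const std::function<Status(BitstreamView&)>& decode) {
    const uint32_t savedOffset = bs.DataOffset;
    const uint32_t savedLength = bs.DataLength;
    Status st = decode(bs);
    while (st == ErrMoreSurface && bs.DataOffset != savedOffset) {
        bs.DataOffset = savedOffset;   // rewind to the start of the frame
        bs.DataLength = savedLength;   // restore the frame's length
        st = decode(bs);               // resubmit the same frame buffer
    }
    return st;
}
```

In our real code the callback is the DecodeFrameAsync() call (with a fresh working surface supplied on each retry, as MFX_ERR_MORE_SURFACE requests); this sketch just isolates the offset bookkeeping that made the difference for us.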
I haven't yet found anything else that restores decoding to its previous flawless state while keeping our multi-buffer model. So any insight into how the Haswell decode stack handles its operations differently, in a way that might bear on our situation, would be most appreciated!