I've noticed that the H.264 encoder Media Foundation sample (implemented mostly in mf_venc_plg.cpp and mfx_mft_h264ve.cpp) does not use video/GPU memory for either IOPATTERN_IN or IOPATTERN_OUT, even when the hardware implementation is in use. The m_uiInSurfacesMemType class variable starts off as MFX_IOPATTERN_IN_SYSTEM_MEMORY, and although m_uiOutSurfacesMemType is initially set to MFX_IOPATTERN_OUT_VIDEO_MEMORY, it is changed to MFX_IOPATTERN_OUT_SYSTEM_MEMORY on line 1177 via the MEM_IN_TO_OUT() macro. The Intel Media SDK Developer's Guide says video/GPU memory is best for the hardware implementation, but it doesn't say how costly using system memory is in that configuration (e.g., does it actually perform worse than the software implementation with system memory?).
As far as I can tell, the only way to make the encoder use video memory is to call the InitPlg() method on the custom IMFDeviceDXVA interface. The corresponding H.264 decoder Media Foundation plug-in does call this method, but InitPlg() is never called anywhere for the encoder; presumably the assumption is that the user should call it. I believe doing so would set things up so that video memory is used both for input (surfaces supplied by the caller of ProcessInput()) and for output (surfaces supplied, I think, by the encoder plug-in itself). It's still unclear to me how to use the IMFDeviceDXVA interface, however. InitPlg() takes as input an IMfxVideoSession object and an mfxVideoParam object, but those objects have already been created, and all I really want is to make the encoder use video memory. What is the recommended approach for doing so?