Captured Data of NV12 Writing into DXVA Encode Buffer

Hi there,

We're designing an HDMI capture device for a 1080i stream.

The plan is for the capture device to write raw NV12 directly into the MSDK encode frame buffer (DXVA) via DMA over PCIe.
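
For reference, the byte layout the DMA engine would have to honour when writing NV12 into the (locked) encode surface is roughly the following; the pitch stands for whatever row pitch the driver reports, and this is only an illustration, not capture code:

#include <cstddef>

struct Nv12Layout {
    size_t lumaBytes;     // Y plane: pitch * height
    size_t chromaOffset;  // interleaved CbCr plane starts right after the Y plane
    size_t chromaBytes;   // CbCr plane: pitch * height / 2 (4:2:0 subsampling)
};

Nv12Layout DescribeNv12(size_t pitch, size_t height)
{
    Nv12Layout l;
    l.lumaBytes    = pitch * height;
    l.chromaOffset = l.lumaBytes;
    l.chromaBytes  = pitch * height / 2;
    return l;
}
// e.g. DescribeNv12(2048, 1088) for a padded 1920x1080 surface.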

I would like to ask whether the MSDK hardware implementation supports that kind of input. (It seems more efficient, since no colorspace conversion is needed.)

Any advice is appreciated.

Regards,

Joyah

- "What hurts more, the pain of hard work, or the pain of regret?"

Hi Joyah,

Your solution is good as long as "DXVA buffer" is a Direct3D9 or a Direct3D11 (DX11.1 on Windows 8) surface - that's what the Media SDK encoder supports as input.

Please also note that you have to provide Media SDK with the D3D device used to create the buffers and with an external frame allocator, so that HW-accelerated encoding can be performed on those buffers. I recommend checking out the sections "Working with Microsoft* DirectX* Applications" and "Memory Allocation and External Allocators" in mediasdk_man.pdf for more details.
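
To illustrate, a minimal sketch of wiring the D3D device and an external allocator into a Media SDK session might look like the following (g_d3d11Device and the my* allocator callbacks are placeholders you would implement yourself, and the encoder settings are just example values for a 1080i NV12 stream):

// Sketch only: attach a D3D11 device and an external frame allocator to a
// Media SDK session so ENCODE can consume surfaces that live in video memory.
#include <d3d11.h>
#include "mfxvideo.h"

extern ID3D11Device* g_d3d11Device;   // the device that owns the capture/encode surfaces (placeholder)

// External allocator callbacks you implement around your own surface pool
// (see "Memory Allocation and External Allocators" in mediasdk_man.pdf).
mfxStatus MFX_CDECL myAlloc (mfxHDL pthis, mfxFrameAllocRequest* request, mfxFrameAllocResponse* response);
mfxStatus MFX_CDECL myLock  (mfxHDL pthis, mfxMemId mid, mfxFrameData* data);
mfxStatus MFX_CDECL myUnlock(mfxHDL pthis, mfxMemId mid, mfxFrameData* data);
mfxStatus MFX_CDECL myGetHDL(mfxHDL pthis, mfxMemId mid, mfxHDL* handle);
mfxStatus MFX_CDECL myFree  (mfxHDL pthis, mfxFrameAllocResponse* response);

mfxStatus InitHwEncodeSession(mfxSession* session)
{
    mfxVersion ver = { {0, 1} };
    mfxStatus  sts = MFXInit(MFX_IMPL_HARDWARE_ANY | MFX_IMPL_VIA_D3D11, &ver, session);
    if (sts != MFX_ERR_NONE) return sts;

    // 1) Hand Media SDK the same D3D device that created the surfaces.
    sts = MFXVideoCORE_SetHandle(*session, MFX_HANDLE_D3D11_DEVICE, g_d3d11Device);
    if (sts != MFX_ERR_NONE) return sts;

    // 2) Register the external allocator that wraps those surfaces
    //    (static so the structure outlives this function; Media SDK keeps using it).
    static mfxFrameAllocator allocator = {};
    allocator.pthis  = nullptr;        // your allocator context
    allocator.Alloc  = myAlloc;
    allocator.Lock   = myLock;
    allocator.Unlock = myUnlock;
    allocator.GetHDL = myGetHDL;
    allocator.Free   = myFree;
    sts = MFXVideoCORE_SetFrameAllocator(*session, &allocator);
    if (sts != MFX_ERR_NONE) return sts;

    // 3) Encoder takes NV12 input directly from video memory; 1080i is coded as fields.
    mfxVideoParam par = {};
    par.IOPattern                    = MFX_IOPATTERN_IN_VIDEO_MEMORY;
    par.mfx.CodecId                  = MFX_CODEC_AVC;
    par.mfx.RateControlMethod        = MFX_RATECONTROL_VBR;
    par.mfx.TargetKbps               = 8000;              // example bitrate
    par.mfx.FrameInfo.FourCC         = MFX_FOURCC_NV12;
    par.mfx.FrameInfo.ChromaFormat   = MFX_CHROMAFORMAT_YUV420;
    par.mfx.FrameInfo.PicStruct      = MFX_PICSTRUCT_FIELD_TFF;
    par.mfx.FrameInfo.Width          = 1920;
    par.mfx.FrameInfo.Height         = 1088;              // interlaced height rounded up to 32
    par.mfx.FrameInfo.CropW          = 1920;
    par.mfx.FrameInfo.CropH          = 1080;
    par.mfx.FrameInfo.FrameRateExtN  = 30000;
    par.mfx.FrameInfo.FrameRateExtD  = 1001;              // 29.97 fps (example)
    return MFXVideoENCODE_Init(*session, &par);
}

For D3D9 the same flow applies, just with MFX_IMPL_VIA_D3D9 and MFX_HANDLE_D3D9_DEVICE_MANAGER (an IDirect3DDeviceManager9 pointer) instead of the D3D11 device.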

Regards,
Nina

Joyah,

You can also use system memory to do this, and the programming model is simpler to get going with than Direct3D,
but you will pay a performance penalty.
In my experience, encoding 1080p from system memory was about 78% as fast as using Direct3D surfaces.
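
For reference, the system-memory path boils down to initializing the encoder with IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY and pointing a surface at your captured buffer. A rough sketch (the nv12 pointer and pitch are placeholders for your capture output):

#include <cstring>
#include "mfxvideo.h"

// Wrap one captured NV12 frame (Y plane followed by the interleaved CbCr plane)
// so it can be passed to MFXVideoENCODE_EncodeFrameAsync.
void WrapSystemMemoryFrame(mfxFrameSurface1* surf, const mfxFrameInfo* info,
                           mfxU8* nv12, mfxU16 pitch)
{
    std::memset(surf, 0, sizeof(*surf));
    surf->Info       = *info;                                // same FrameInfo the encoder was initialized with
    surf->Data.Pitch = pitch;
    surf->Data.Y     = nv12;                                 // luma plane
    surf->Data.UV    = nv12 + (mfxU32)pitch * info->Height;  // interleaved CbCr plane follows the luma plane
}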

So, you have a couple options.

Cameron

Quote:

Nina Kurina (Intel) wrote:

Your solution is good as long as "DXVA buffer" is a Direct3D9 or a Direct3D11 (DX11.1 on Windows 8) surface - that's what the Media SDK encoder supports as input. [...]

Your reply helps a lot, thank you!

- "What hurts more, the pain of hard work, or the pain of regret?"

Quote:

camkego wrote:

You can also use system memory to do this, and the programming model is simpler to get going with than Direct3D, but you will pay a performance penalty. [...]

Thanks for your advice!

- "What hurts more, the pain of hard work, or the pain of regret?"

If the native memory for the capture device output is video memory, it would be better to use D3D surfaces to avoid a costly copy from video to system memory.

Regards,
Nina

Hello, Joyah and Nina Kurina (Intel). I'm also designing an HDMI capture card. Right now I can capture raw NV12 into system memory, but to improve performance I'd like to capture into video memory. I'm working on an Intel Core i5, and my minidriver is based on AVStream.

To do this, the WDK documentation says: "Obtain the adapter GUID from the vendor-supplied graphics miniport driver. The DXGK_INTERFACESPECIFICDATA structure contains the adapter GUID to return in the property request. This structure is generated by the DirectX graphics kernel (DXGK) subsystem and is passed to the miniport driver when the adapter is initialized."

But I'm still puzzled about the specific steps to obtain that display adapter's GUID. Can you explain them to me?

Thanks very much!
