Sample decoder rendering output broken

stevz's picture

Hi there

this is my first post here after discovering the Media SDK just last week. For now I want to load and decode an AVC stream in my program. The Media SDK looks like what I need, but I came across some difficulties which I hope to find answers for now :)

First, I do not fully understand which platforms provide hardware acceleration with the Media SDK. I run a desktop processor, a Core i7 870. When initializing an mfx session using MFX_IMPL_HARDWARE the status is UNSUPPORTED; MFX_IMPL_SOFTWARE works. So, is the hardware acceleration only available on Intel GMA graphics chips but not on desktop computers featuring a dedicated graphics board such as Nvidia or ATI/AMD? Or do I need to install some fancy Intel drivers I have not yet discovered? Or is my i7 870 simply too old to be hardware accelerated with the Media SDK?

Second, I've been able to write my own decoder based on the simple decoder sample in the SDK. During development I've only tested it based on return status values. After putting all the pieces together and wiping out the bugs I had already discovered, my decoder seemed to process and decode my input AVC stream without producing errors. I have not included rendering so far, so I can't really tell.

But by simply counting the number of successful SyncOperation calls I found that I get far fewer frames from my input AVC stream than there should be. How can this happen if I fed the entire incoming stream to the mfx decoder NALU by NALU and did not get any error reports? I tried different AVC input files and calculated the number of frames to expect from their length and frame rate. I also checked my calculation and the AVC validity by loading these files with After Effects. The files were produced with a Panasonic SD-66 and a Panasonic Z10000 camera at 1080i50 and 1080p50, respectively.
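As a side note, the expected frame count I checked against can be computed from duration times frame rate; a minimal sketch (the 60-second duration is just an example, and note that 1080i50 carries 50 fields but only 25 frames per second):

```cpp
#include <cmath>

// Expected number of frames = duration (seconds) * frame rate (frames/s).
// Caution: 1080i50 means 50 interlaced *fields* per second, i.e. 25 frames/s,
// while 1080p50 means 50 full progressive frames per second.
long expected_frames(double duration_s, double frames_per_second) {
    return std::lround(duration_s * frames_per_second);
}
```

For a 60-second clip this gives 1500 frames at 1080i50 (25 frames/s) but 3000 frames at 1080p50, so mixing up fields and frames halves or doubles the expectation.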

Next I tried the precompiled simple decoder which comes with the SDK installation in the samples/_bin/x64 path. I'm running Windows 7 64-bit, by the way.

I did not (yet) recompile the sample decoder and just used the original one. It can open my files and seems to decode the correct number of frames, so I guess there is still a bug in my own implementation, which differs somewhat from the sample decoder. However, the rendering output of the sample decoder is broken: it just displays several chunks of various colors. The text output, on the other hand, looks OK. The format and resolution are recognized and the decoding does not produce any errors. The number of frames seems to fit, so only the rendering is broken.

Now what does this mean for me? Is the decoded data already broken, or is the data still correct and just the rendering broken?

How should I proceed from here? I would like to verify somehow that the Media SDK can actually decode my AVC data and render the video correctly. After that I could dig through my version of a decoder and hunt the remaining bugs that make me lose frames.

Any helping hand is welcome :)

regards

Attachments: Unbenannt.png (679.14 KB), Unbenannt2.png (3.73 KB)
stevz's picture

Digging a bit deeper into the supported architectures, it looks like I would need a Sandy Bridge Core i7, such as a Core i7 2xxx or 3xxx, to get the hardware implementation. Am I right with this assumption?

That leaves me with the most pressing problem: why is the rendering messed up with the sample "simple decoder"? Could there be other hardware/software details I'm overlooking? Could the AVC stream use some kind of format that the Intel Media SDK does not support (but also does not report as invalid)?

Thanks for any comment :)

regards

Nina Kurina (Intel)'s picture

Hi stevz,
Your platform does support hardware accelerated decode. Sandy Bridge is only needed if you wish to additionally accelerate encode.
Just in case, please check that libmfxhw64/32-i1.dll is present under C:\Program Files\Common Files\Intel\Media SDK\i1\1.5. It should have been installed by the graphics driver.

But I assume you have a discrete graphics card in your laptop; this may cause problems with enabling HW acceleration on the integrated Intel graphics. Please try switching graphics to make the Intel GFX active (check out the section "Switchable Graphics and Multiple Monitors" in mediasdk_man.pdf).

Another relevant point is the command line. With some of the options, e.g. -latency, the sample app raises the minimum MSDK API version it requires, so the hardware library installed on your platform by the graphics driver won't fit (it has API version 1.0). Can you provide the command line you use? You can also check the version which is passed to MFXInit.

As for the broken rendering, can you please check the following items:
1) the input video stream is an elementary video stream
2) try writing to a file instead of rendering and check how the output YUV file looks
3) again, the command line would be helpful
For your application's problem of not all frames being decoded, please check that after the input stream has ended you enter a loop to extract the buffered frames from the decoder, with a NULL bitstream as input. Sample_decode does that.
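The drain loop described above can be modeled in a few lines. Note this is a simplified stand-in to illustrate the control flow only: the enum, StubDecoder type, and counts below are invented for the sketch and are not the real Media SDK types (the real API uses mfxStatus, MFXVideoDECODE_DecodeFrameAsync, and surface/sync-point parameters):

```cpp
// Simplified stand-ins for illustration; NOT the real Media SDK types.
enum Status { STATUS_OK, STATUS_MORE_DATA };

// A stub decoder that still holds `buffered` frames after end of stream.
struct StubDecoder {
    int buffered;
    // A null bitstream pointer signals end of stream: the decoder keeps
    // returning OK while buffered frames remain, then MORE_DATA when empty.
    Status DecodeFrameAsync(const void* bitstream) {
        if (bitstream != nullptr) return STATUS_OK;  // normal decoding path
        if (buffered > 0) { --buffered; return STATUS_OK; }
        return STATUS_MORE_DATA;                     // fully drained
    }
};

// After the input ends, keep calling with a null bitstream until the
// decoder reports MORE_DATA; returns how many buffered frames came out.
int drain(StubDecoder& dec) {
    int frames = 0;
    while (dec.DecodeFrameAsync(nullptr) == STATUS_OK)
        ++frames;
    return frames;
}
```

Skipping this flush loop silently loses however many frames the decoder was still holding, which matches the "fewer frames than expected, but no errors" symptom.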
Please let me know how it goes.
Regards, Nina

stevz's picture

Hi Nina,

thanks for your reply. I can already provide some quick information before I get home today and investigate further.

As for my platform: I do not have a notebook but a desktop PC with no Intel GMA chipset. It's an i7 870 along with an Nvidia GTX 470. So I assume my i7 870 has no hardware acceleration in this case?

To address your last suggestion: I already continue my loop after the input stream has ended, providing a NULL pointer to the decoder as the bitstream. I only abort the loop when I get the MFX_ERR_MORE_DATA result from the decoder with NULL bitstream input. So this should be OK.

Anyway, I will ignore my own implementation for now, since it could contain more bugs which I have to hunt down myself before I can come up with more specific questions.

For now I would like to focus on getting the "simple decoder" sample to produce correct output for my input videos. Since this is relatively new ground for me, please forgive and answer the following follow-up questions:

"API version"
I provided two attached files in my post above, the second one being a screenshot of the command line output of the decoder sample run. From what I read there, the API version used seems to be 1.3.

"elementary video stream"
What exactly is an elementary video stream and how do I verify that? I feed a plain .mts file right from my camcorder to the decoder sample, which recognizes it as an AVC stream (see my second attached image above). Wouldn't that mean that the Media SDK accepts this stream?

"how output YUV file looks like"
What format does the sample decoder produce? I have already seen that I can write the decoded frames to a file instead of rendering them. But what kind of file type is generated by the decoder sample and how can I check whether it looks OK?

Does the decoder sample write one file for each stream (this did not seem to work) or does it generate a YUV stream? Which application can I use to check the output file? I have Adobe CS 5.5 available but have not worked with uncompressed formats before. Is the YUV output created by the sample decoder a raw YUV 4:4:4 format?

"command line"
I just used the most obvious parameters to run my test with the precompiled sample decoder, something like this:

sample_decoder.exe -i "blabla\test.mts" -r

I even tried writing the output file:

sample_decoder.exe -i "blabla\test.mts" -o "blabla\test" -r

But that just produced a single file called "test" in the corresponding directory. I would have expected a YUV image series with some file extension ... ?

Using the -hw option produces a failure, which I looked up in the sample code: ERR_UNSUPPORTED for using the HARDWARE option when initializing the session.

Sorry for asking these basics, but while I have a fair amount of experience in C++ and 3D graphics, I'm fairly new to the world of codecs and video formats. :)

regards

Nina Kurina (Intel)'s picture
Best Reply

Ok, I see now. You have a multi-monitor set-up, so you should still see the section of the manual I mentioned above.
To make hardware acceleration available you need to:
1) make the Intel GPU active in the BIOS and connected to a display
2) make the Intel GPU primary - then the sample app should work with the -hw option as is; OR, if the Intel GPU is not primary, modify the MFXInit parameters to use impl = MFX_IMPL_HARDWARE_ANY when -hw is specified.
Sample_decode.exe accepts only elementary video streams. Your files are obviously MPEG-2 TS container files, hence the artifacts when rendering. More details on elementary vs. container streams can be found in the Media Developers Guide under \doc.
Sample_decode produces a YUV 420 file; this is a single file with all frames stored one after another. Please check out \samples\sample_decode\readme-decode.rtf for details on sample usage, and again the Media Dev Guide on YUV formats and viewers.
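Since all frames sit back to back in that single YUV 4:2:0 file, the frame count can be sanity-checked from the file size alone; a small sketch (assuming 8-bit samples, which gives 1.5 bytes per pixel):

```cpp
// In YUV 4:2:0 with 8-bit samples, each frame is width*height luma bytes
// plus two chroma planes at a quarter of that size each: 1.5 bytes/pixel.
long yuv420_frame_bytes(long width, long height) {
    return width * height * 3 / 2;
}

// Number of whole frames contained in a raw YUV 4:2:0 dump of a given size.
long frames_in_yuv_file(long file_bytes, long width, long height) {
    return file_bytes / yuv420_frame_bytes(width, height);
}
```

For 1920x1080 each frame is 3,110,400 bytes, so dividing the output file size by that value should reproduce the number of decoded frames; a remainder would indicate a truncated or mismatched dump.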
Media Dev Guide has plenty of useful info for a quick start on video and Media SDK.
Regards, Nina

stevz's picture

Thanks a lot Nina, yes, I forgot to mention that I have a second display connected.

Also, thanks for pointing me to the specific docs. It already helps to have some keywords to look for, and I think I have plenty to read this evening to get a better understanding of the various aspects such as containers and elementary streams.

Now I have to do my homework and get back later. But it's good to know that Intel provides a helping hand at that speed. I really appreciate it :)

regards

Nina Kurina (Intel)'s picture

You are always welcome :-) Thanks for good words!
-Nina

stevz's picture

I have now read more of the docs and used Adobe Premiere to render my .mts as an elementary H.264 video stream file, since I couldn't find a quick and dirty free demuxer that works with .mts files. However, I can now feed my re-rendered H.264 stream to the Media SDK simple decoder sample and get a correct rendering of the video.

So now I can start hunting bugs in my own implementation to get the same result before I dive into demuxing container streams myself.

Again, thanks for pointing out my flaw in trying to use the wrong input files. :)
