How to get output encoded Frame from Jpeg Encoder for every input frame (no buffering)

Hi,

I am using the sample Jpeg Encoder from sample_encode application of Media SDK.

By default, this encoder delivers the first encoded output frame only after consuming 8 input frames. So I changed the AsyncDepth value to 1 (the default was 4). Now it produces an output frame after consuming 2 input frames, but I am looking for output from the very first frame.

After debugging the code further, I found that the AsyncDepth value actually determines the size of the encoder task pool (= AsyncDepth * 2), which cannot be set to 0.

So is there any way to get an encoded output frame from the first input frame itself?


Thanks.

~Ramashankar

No response yet. Is there something missing or unclear in my query?

OK, let me put it in a simpler form:

Is there any mechanism/API/library in IMSDK through which I can convert any YUV420 frame into a JPEG frame?

 

~Ramashankar
Tony Pabon (Intel)

Hi,

Sorry for the slow response.

The sample code that uses "AsyncDepth * 2" and checks for an even number of tasks exists only to support multi-output streams such as MVC. If the output is JPEG still images, there is no need for this.

You may use AsyncDepth of 1 without issue, but you will need to modify the sample code that manages the tasks: remove the "* 2" from the calling function, and remove the check for an even value in the CEncTaskPool::Init() function (in pipeline_encode.cpp).
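To illustrate the idea, here is a simplified sketch of a task pool sized directly from AsyncDepth with the even-size check dropped. This is not the actual sample source; the real CEncTaskPool has more parameters and helper macros, and the names below are only illustrative:

```cpp
// Simplified sketch only: a task pool sized straight from AsyncDepth, with no
// "pool size must be even" requirement.  The real CEncTaskPool in
// pipeline_encode.cpp differs in details; these names are illustrative.
#include <vector>
#include "mfxvideo++.h"

struct sTask
{
    mfxBitstream mfxBS{};            // per-task output buffer
    mfxSyncPoint EncSyncP = nullptr; // sync point returned by EncodeFrameAsync
};

class CEncTaskPool
{
public:
    // At the call site, nPoolSize would be m_mfxEncParams.AsyncDepth (no "* 2").
    mfxStatus Init(MFXVideoSession* pSession, mfxU32 nPoolSize, mfxU32 nBufferSize)
    {
        if (!pSession || 0 == nPoolSize || 0 == nBufferSize)
            return MFX_ERR_UNDEFINED_BEHAVIOR;

        // Note: no check that nPoolSize is even -- that is only needed when a
        // single task feeds two output bitstreams (e.g. MVC), not JPEG stills.
        m_pSession = pSession;
        m_Tasks.resize(nPoolSize);
        for (sTask& t : m_Tasks)
        {
            t.mfxBS.MaxLength = nBufferSize;
            t.mfxBS.Data = new mfxU8[nBufferSize]; // freed in a Close(), omitted here
        }
        return MFX_ERR_NONE;
    }

private:
    MFXVideoSession*   m_pSession = nullptr;
    std::vector<sTask> m_Tasks;
};
```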

The limit is really a "sample code" limitation, and not a limitation of the Intel Media SDK API itself.
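For reference, encoding a single frame into a JPEG through the API directly looks roughly like the sketch below. It is a minimal sketch rather than the sample code: the resolution, JPEG quality, and buffer sizes are assumed values, and the input is an NV12 surface (planar YUV420/I420 data would first need its U and V planes interleaved into NV12).

```cpp
// Minimal sketch (not the sample code): encode one NV12 frame to JPEG with the
// Media SDK API.  Error handling is trimmed; sizes and quality are assumptions.
#include <cstdio>
#include <cstring>
#include <vector>
#include "mfxvideo++.h"

int main()
{
    const mfxU16 width = 640, height = 480;           // assumed frame size
    const mfxU16 alignedW = (width  + 15) & ~15;
    const mfxU16 alignedH = (height + 15) & ~15;

    MFXVideoSession session;
    mfxVersion ver = {{0, 1}};
    if (session.Init(MFX_IMPL_AUTO_ANY, &ver) != MFX_ERR_NONE)
        return 1;

    // Configure the JPEG encoder: one frame in flight, system-memory input.
    mfxVideoParam par;
    memset(&par, 0, sizeof(par));
    par.mfx.CodecId                 = MFX_CODEC_JPEG;
    par.mfx.Interleaved             = MFX_SCANTYPE_INTERLEAVED;
    par.mfx.Quality                 = 90;              // 1..100, assumed value
    par.mfx.FrameInfo.FourCC        = MFX_FOURCC_NV12;
    par.mfx.FrameInfo.ChromaFormat  = MFX_CHROMAFORMAT_YUV420;
    par.mfx.FrameInfo.PicStruct     = MFX_PICSTRUCT_PROGRESSIVE;
    par.mfx.FrameInfo.Width         = alignedW;
    par.mfx.FrameInfo.Height        = alignedH;
    par.mfx.FrameInfo.CropW         = width;
    par.mfx.FrameInfo.CropH         = height;
    par.mfx.FrameInfo.FrameRateExtN = 30;
    par.mfx.FrameInfo.FrameRateExtD = 1;
    par.IOPattern                   = MFX_IOPATTERN_IN_SYSTEM_MEMORY;
    par.AsyncDepth                  = 1;

    MFXVideoENCODE encoder(session);
    if (encoder.Init(&par) < MFX_ERR_NONE)
        return 1;

    // One NV12 surface in system memory (here just a grey test frame).
    std::vector<mfxU8> yPlane(alignedW * alignedH, 0x80);
    std::vector<mfxU8> uvPlane(alignedW * alignedH / 2, 0x80);
    mfxFrameSurface1 surf;
    memset(&surf, 0, sizeof(surf));
    surf.Info       = par.mfx.FrameInfo;
    surf.Data.Y     = yPlane.data();
    surf.Data.UV    = uvPlane.data();
    surf.Data.Pitch = alignedW;

    // Output bitstream buffer; 4 MB is an arbitrary, generous size.
    std::vector<mfxU8> bsBuf(4 * 1024 * 1024);
    mfxBitstream bs;
    memset(&bs, 0, sizeof(bs));
    bs.Data      = bsBuf.data();
    bs.MaxLength = (mfxU32)bsBuf.size();

    // Submit the frame and wait on its sync point right away: with
    // AsyncDepth = 1 there is no output latency.
    mfxSyncPoint syncp = nullptr;
    mfxStatus sts = encoder.EncodeFrameAsync(nullptr, &surf, &bs, &syncp);
    if (sts >= MFX_ERR_NONE && syncp)
    {
        session.SyncOperation(syncp, 60000);
        FILE* f = fopen("frame0.jpg", "wb");
        if (f)
        {
            fwrite(bs.Data + bs.DataOffset, 1, bs.DataLength, f);
            fclose(f);
        }
    }

    encoder.Close();
    return 0;
}
```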

We'll look at the possibility of updating our sample code in the future.

-Tony

Thanks Tony. I will check this approach.

Meanwhile, a further doubt:

If I set AsyncDepth to 1 and remove the "* 2", it will probably create the encoder task pool with size 1. So I think the JPEG encoder will still start giving the first encoded output frame only from the 2nd input frame onward (i.e., it will still hold 1 frame as a buffer). Is that right? What do you think?
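For what it's worth, the check I plan to run is roughly the loop below (a sketch with assumed setup, not the sample's task-pool code): if the sync point is waited on immediately after EncodeFrameAsync(), the first JPEG should come out for the first input frame.

```cpp
// Rough sketch, not the sample's task-pool path: submit one frame and block on
// its sync point immediately, so the encoded JPEG is written for the very
// first input frame.  The encoder, session, surface, and bitstream are assumed
// to be set up as for a normal Media SDK encode (AsyncDepth = 1, system memory).
#include <cstdio>
#include <thread>
#include <chrono>
#include "mfxvideo++.h"

mfxStatus EncodeOneFrame(MFXVideoSession& session, MFXVideoENCODE& encoder,
                         mfxFrameSurface1* surface, mfxBitstream& bs, FILE* fout)
{
    mfxSyncPoint syncp = nullptr;
    mfxStatus sts;

    // Retry while the device is busy, as the SDK manual recommends.
    do {
        sts = encoder.EncodeFrameAsync(nullptr, surface, &bs, &syncp);
        if (MFX_WRN_DEVICE_BUSY == sts)
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
    } while (MFX_WRN_DEVICE_BUSY == sts);

    if (MFX_ERR_MORE_DATA == sts)
        return sts;  // encoder buffered the frame (not expected for JPEG stills)

    if (sts >= MFX_ERR_NONE && syncp)
    {
        // Block until this frame's JPEG is ready, write it out, and reset the
        // bitstream for reuse with the next frame.
        session.SyncOperation(syncp, 60000);
        fwrite(bs.Data + bs.DataOffset, 1, bs.DataLength, fout);
        bs.DataLength = 0;
        bs.DataOffset = 0;
        return MFX_ERR_NONE;
    }
    return sts;
}
```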

I will try to debug and confirm it at my end also.

~Ramashankar
