Hi all, I was looking at the documentation of the UMC::H264Decoder class (IPP version 7.0.7) and I found that there's a new function that I had never noticed in older versions.
The H264 decoder is sometimes very heavy, especially on Full HD or higher-resolution streams. So when I found that function, I wanted to use it to make the decoding process consume less CPU time, especially when I have to decode many streams at the same time (e.g. many IP cameras): this way I can have a more scalable system.
I think that function was written for exactly that purpose. The problem is that I ran a test on my PC (a fairly old Intel Q6600) and found no difference between using 0 or 7 as the decoding speed: I noticed only poor quality (as I expected) at speed 7, but the CPU consumed by the decoding process is the same at 0 and 7 (decoding CPU load was around 10% in all cases).
Have you run any tests with that function at different speeds? Did you notice any difference? On what kind of CPU?