Hello everyone,
I integrated the Intel Media SDK into my own application, which only decodes frames and saves each one to a separate file, and tested it on the new Intel Ultrabook.
I measured performance to see how much the Media SDK would improve decoding speed, and the results surprised me.
I tested both the software and the hardware implementation, each with system and with D3D memory allocation.
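For reference, this is roughly how I switch between the four configurations at init time (a simplified sketch; the H.264 codec ID and the variable names are just my example, error handling omitted):

    #include <mfxvideo++.h>

    mfxVersion ver = {{0, 1}};            // API version 1.0 or later
    MFXVideoSession session;

    // MFX_IMPL_SOFTWARE vs. MFX_IMPL_HARDWARE selects the implementation
    mfxStatus sts = session.Init(MFX_IMPL_HARDWARE, &ver);

    mfxBitstream bitstream = {};          // filled with the compressed stream elsewhere

    mfxVideoParam par = {};
    par.mfx.CodecId = MFX_CODEC_AVC;      // my example: an H.264 stream
    // MFX_IOPATTERN_OUT_SYSTEM_MEMORY vs. MFX_IOPATTERN_OUT_VIDEO_MEMORY
    // selects between system memory and D3D surfaces for the decoded frames
    par.IOPattern = MFX_IOPATTERN_OUT_VIDEO_MEMORY;

    MFXVideoDECODE decoder(session);
    sts = decoder.DecodeHeader(&bitstream, &par);   // read stream parameters
    sts = decoder.Init(&par);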
Why was I surprised? The software implementation with system memory is impressively fast, much faster than the decoding routines I used before.
But with the hardware implementation, decoding takes roughly twice as long as with the software implementation and is also much slower than my previous routine.
I tested the hardware implementation with both system and D3D memory allocation, and with system memory the decoding is faster than with D3D memory.
Finally, I tried the software implementation with D3D memory, which is slower than the hardware implementation with system memory.
For D3D memory allocation I adopted the D3D allocator from the SDK's sample_decode project.
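Concretely, continuing the sketch above, I plug the allocator in like this before decoder.Init (pD3DManager stands for the IDirect3DDeviceManager9 I create for my D3D9 device; the class and header names are the ones from the samples' sample_common):

    #include "d3d_allocator.h"   // D3DFrameAllocator from the SDK samples

    // pD3DManager: the IDirect3DDeviceManager9* created for my D3D9 device
    D3DAllocatorParams allocParams;
    allocParams.pManager = pD3DManager;

    D3DFrameAllocator allocator;
    allocator.Init(&allocParams);

    // both the device manager and the allocator have to be handed to the
    // session before MFXVideoDECODE::Init is called
    session.SetHandle(MFX_HANDLE_D3D9_DEVICE_MANAGER, (mfxHDL)pD3DManager);
    session.SetFrameAllocator(&allocator);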
To rule out problems specific to my own application, I also measured decoding with the SDK's sample_decode project, which produced the same results as described above.
For the measurements I used a GPA trace so that the decoding call (DecodeFrameAsync) and the frame reading could be measured separately.
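To cross-check the GPA numbers I also timed one iteration directly; since DecodeFrameAsync only submits work, I measured the call together with its SyncOperation (again a simplified continuation of the sketch above; pWorkSurface is a free working surface from my surface pool):

    #include <chrono>

    mfxFrameSurface1* pOutSurface = NULL;
    mfxSyncPoint      syncPoint   = NULL;

    auto t0 = std::chrono::high_resolution_clock::now();

    // submit one frame for decoding ...
    sts = decoder.DecodeFrameAsync(&bitstream, pWorkSurface, &pOutSurface, &syncPoint);
    // ... and wait until it has actually been decoded
    if (MFX_ERR_NONE == sts)
        sts = session.SyncOperation(syncPoint, 60000);

    auto t1 = std::chrono::high_resolution_clock::now();
    double decodeMs = std::chrono::duration<double, std::milli>(t1 - t0).count();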
Was my assumption wrong that hardware decoding is faster than software decoding?
Or does the hardware implementation only benefit applications that display the decoded frames directly on the graphics device?
I hope someone can explain the underlying implementations in a bit more detail.