When reviewing marketing and other literature, Intel consistently refers to QSV and their H.264 encoder as having "lower latency" or "low latency". While I understand that it does have lower latency than a software encoder, what are Intel's actual latency results when compared to, say, software only, Sandy Bridge, and now Ivy Bridge? How is Intel defining low latency? Is it 2 ms? 10 ms? 0.5 ms?
Are there published benchmarks or case studies that take different sized video streams, say 480p at 30fps, 720p at 30fps, 1080p at 30fps, etc., and measure the encoding latency on each system using a live 1080p HD or 480p stream? What would be nice is a general matrix like the one below with a latency number filled in for each blank.
                      Core2 (software only)   Core i7 2600K (Sandy Bridge)   Core i7 xxxxK (Ivy Bridge, SNB counterpart)
320x240 at 30fps      ____                    ____                           ____
640x480 at 30fps      ____                    ____                           ____
480p at 30fps         ____                    ____                           ____
720p at 30fps         ____                    ____                           ____
1080p at 30fps        ____                    ____                           ____
With any benchmark there are standard configurations used for memory/HDD/etc., so I know those are factors in the measurement; I'm not looking for a discussion on that topic :) Latency should be the time from source video in to encoded video out.
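For what it's worth, if no published numbers turn up, here is roughly how I would measure it myself: a minimal per-frame timing harness, where encode_frame() is a hypothetical stand-in for whatever encoder is under test (QSV via the Media SDK, x264 in software, etc.) and the dummy NV12 frame replaces a captured live frame. This is only a sketch of the measurement, not any vendor's methodology.

    // Minimal per-frame latency harness (sketch).
    // Latency here = wall-clock time from raw frame in to encoded bitstream out.
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    // Placeholder: submit one raw frame and block until its bitstream is ready.
    // Replace with the real encoder call(s) for the system being measured.
    static void encode_frame(const std::vector<unsigned char>& raw_frame) {
        (void)raw_frame; // a real implementation would emit an encoded frame here
    }

    int main() {
        const int width = 1280, height = 720, frames = 300;       // e.g. 720p, 10 s at 30 fps
        std::vector<unsigned char> frame(width * height * 3 / 2); // NV12-sized dummy frame
        std::vector<double> latencies_ms;
        latencies_ms.reserve(frames);

        for (int i = 0; i < frames; ++i) {
            auto t0 = std::chrono::steady_clock::now();
            encode_frame(frame);                                   // frame in -> bitstream out
            auto t1 = std::chrono::steady_clock::now();
            latencies_ms.push_back(
                std::chrono::duration<double, std::milli>(t1 - t0).count());
        }

        double avg = std::accumulate(latencies_ms.begin(), latencies_ms.end(), 0.0) / frames;
        auto mm = std::minmax_element(latencies_ms.begin(), latencies_ms.end());
        std::printf("min %.2f ms  avg %.2f ms  max %.2f ms\n", *mm.first, avg, *mm.second);
    }

One caveat with this kind of harness: QSV encoding is asynchronous, so for it to measure what I mean by latency, encode_frame() would have to include the wait for the encode to actually complete, not just the submission of the frame.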