Help with camera preview in Android media sample

Hello Intel INDE team,

I am building an app with live streaming and am using code from the Android sample CameraStreamerActivity. Everything works great in landscape mode, and I get a full-screen preview. However, when I run it in portrait mode, I get a square preview that does not fill the screen. I've tried setting different preview sizes, and even removing setPreviewSize entirely, but I still cannot fill the whole screen.

This is a sample of how my preview looks:


Thank you.



Failed to open va_openDriver() in crontab

Hardware-accelerated encoding works fine when run from an interactive console.
However, when the same command is run from crontab, va_openDriver() fails.

libva info: VA-API version 0.35.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/
libva info: Found init function __vaDriverInit_0_32
libva info: va_openDriver() returns 0
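
This behavior is typically explained by cron's environment, not by the SDK itself: cron jobs start with a minimal environment, so the libva variables (and any DISPLAY an X11-backed VA-API path needs) that an interactive shell exports are simply missing. A hypothetical crontab sketch follows; the driver path, display number, and job script are assumptions to adapt to your install, not values from this post.

```shell
# Sketch of a crontab that exports the environment libva needs.
# cron does not source your login shell, so set these explicitly.
LIBVA_DRIVER_NAME=iHD
LIBVA_DRIVERS_PATH=/opt/intel/mediasdk/lib64
DISPLAY=:0

# encode_job.sh is a hypothetical wrapper around your encoding command;
# logging stdout/stderr helps capture the libva messages cron would hide.
0 * * * * /usr/local/bin/encode_job.sh >> /tmp/encode.log 2>&1
```

Comparing `env` output from the console session against `env` captured inside the cron job is a quick way to confirm which variables differ.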

Question about the "ECCN" for the Intel Media SDK

Dear Experts,

Please let me know the ECCN (Export Control Classification Number) for
the Intel Media SDK; we require it for the export control of our
software, which includes the SDK.

I would also appreciate it if someone could tell me the CCATS (Commodity Classification Automated Tracking System) number, if one is applicable.

Best Regards,



Unexpected MFX_ERR_NOT_ENOUGH_BUFFER error during h.264 encoding


We use the Media SDK to do software H.264 video encoding.

We switched from Media SDK 2012 R2 to Media SDK 2014 R2. Long-run tests revealed a problem: after a period of several hours (the interval is not constant), encoding stops working and no more video frames are encoded.
After investigating, we found that MFXVideoENCODE::EncodeFrameAsync(...) returns -5, i.e. MFX_ERR_NOT_ENOUGH_BUFFER.

The documentation states that this error occurs when the bitstream buffer size is insufficient.

A hardware QuickSync question

I would like to ask if you are going to change the way hardware transcoding works right now.

From QuickSync version 1 (in Sandy Bridge) until now, the transcoding process has used a lot of the iGPU's general hardware resources (the EUs) in addition to the fixed-function ASIC, so we could say it is not a pure hardware (ASIC-only) implementation.

Nvidia and AMD, on the other hand, use ASIC-only transcoding: NVENC from Nvidia and VCE from AMD use only the ASIC inside the GPU and very few general GPU resources, so GPU load stays below 5%.

Copy decoded video frame from video surface into system memory


I am using simple_decode_d3d example from MediaSDK tutorials for decoding 4K H.264 video.

Usage Example: simple_decode_d3d -i Test4KVideo.h264

The simple_decode_d3d sample gives 300 fps (without saving/copying the output).

However, when I copy the decoded frame in NV12 format (raw frame size ~12 MB) to a local buffer using memcpy, the frame rate drops below 10 fps.

libmfxsw64.dll VPP erroneous artifacts


Using the sample 'simple_6_decode_vpp_postproc' from the latest tutorials by Jeffrey Mcallister, I am seeing noisy/corrupted video output.

I have not modified the tutorial example; I am using it as delivered.

I have isolated the problem: using hardware mode (-hw) does not show it, and the tutorial simple_2_decode does not exhibit it either.

I am using the following .BAT script to reproduce the problem. This tutorial sample resizes to 50% by 50%, so use NV12 at 160x90 in your YUV viewer.


Processors that support QuickSync

I believe that a few Sandy Bridge processors did not support QuickSync, and that the only way to be sure was to look the processor up on Intel ARK and check for QuickSync in its feature list.

Can we assume that ALL Ivy Bridge, Haswell, and Broadwell processors support QuickSync decoding/encoding, or are there still annoying exceptions that mean we always have to consult the ARK website to find out?
