Intel® INDE Media for Mobile Tutorials - Video Capturing for OpenGL* Applications

This tutorial explains how to use Intel® INDE Media for Mobile to add video capturing capability to OpenGL* applications.

Before getting started, download and install Intel INDE. After installing Intel INDE, choose to download and install the Media for Mobile components. For additional assistance, visit the Intel INDE forum.

Go to the Media for Mobile installation folder, open its libs subfolder, and copy the two jar files (android-<version>.jar and domain-<version>.jar) into your application's libs folder:


We are ready to go now, but first let me explain how the GLCapture class works. GLCapture has its own rendering surface and OpenGL context. All frames rendered to this surface can be encoded as video frames. Before the first use we need to set up some required parameters such as the video resolution, bit rate, path to the output video file, etc.

Now it’s time to add some code. Here are step-by-step instructions for configuring a GLCapture object and starting the video capturing process:

Create a new instance of GLCapture

GLCapture mCapturer;
mCapturer = new GLCapture(new AndroidMediaObjectFactory());

Configure the video format

int VideoWidth = 720;
int VideoHeight = 1280;
int VideoFrameRate = 30;
int VideoIFrameInterval = 1;
int VideoBitRate = 1000;
String VideoCodec = "video/avc";

VideoFormatAndroid videoFormat = new VideoFormatAndroid(VideoCodec, VideoWidth, VideoHeight);
// Apply the remaining parameters; setter names follow the Media for Mobile samples
videoFormat.setVideoFrameRate(VideoFrameRate);
videoFormat.setVideoBitRateInKBytes(VideoBitRate);
videoFormat.setVideoIFrameInterval(VideoIFrameInterval);


Destination file path
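The code for this step is not reproduced in the original text. The following sketch shows how the destination file and the video format can be applied; the method names setTargetFile() and setTargetVideoFormat() are taken from the Media for Mobile samples, and videoPath is an assumed variable holding the output file path:

```java
// Sketch based on the Media for Mobile samples; videoPath is assumed
// to hold a writable path, e.g. a file on external storage.
try {
    mCapturer.setTargetFile(videoPath);
} catch (IOException e) {
    // Could not open the destination file for writing
    e.printStackTrace();
}
mCapturer.setTargetVideoFormat(videoFormat);
```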


And now the most important part: we need to configure the video capturing surface. To do this, call

mCapturer.setSurfaceSize(VideoWidth, VideoHeight)

This should be done in a function with an active OpenGL context, such as

onSurfaceChanged(GL10 gl, int width, int height)

or

onDrawFrame(GL10 gl)

But make sure to do this only once.
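A minimal sketch of this one-time setup, using a boolean flag isSurfaceConfigured that we introduce here purely for illustration:

```java
@Override
public void onDrawFrame(GL10 gl) {
    // Configure the capturing surface exactly once,
    // while an OpenGL context is active
    if (!isSurfaceConfigured) {
        mCapturer.setSurfaceSize(VideoWidth, VideoHeight);
        isSurfaceConfigured = true;
    }
    // ... render the scene ...
}
```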

The video capturing process works inside the rendering loop. A simple way to capture frames is the following: render the scene to the display, then switch the OpenGL context to the video capturing surface by calling mCapturer.beginCaptureFrame(), so that all subsequent OpenGL calls affect the video surface. After that, render the scene once again and call mCapturer.endCaptureFrame() to restore the default OpenGL context.
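This render-twice approach can be sketched like so (renderScene() is a hypothetical method containing the application's own draw calls):

```java
@Override
public void onDrawFrame(GL10 gl) {
    renderScene();                 // 1) render to the display
    mCapturer.beginCaptureFrame(); // 2) switch to the video capturing surface
    renderScene();                 // 3) render the same frame again
    mCapturer.endCaptureFrame();   // 4) restore the default OpenGL context
}
```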


In some applications with complex graphics, this method can be too expensive in terms of performance. For such cases we recommend using a frame buffer with an attached texture as the render target, and then rendering the final scene as a full-screen texture both to the display and to the video capturing surface.


Sample application

You will find implementations of both methods in the Intel® INDE Media for Mobile Samples source code. For a demonstration, run the Samples application and select Game Capturing; the sample allows switching between the two methods described above. Here is a screenshot of this sample:

To keep the sample source code clearer, we developed a few helper classes that implement all the needed functionality and provide easy-to-use interfaces.


A singleton wrapper for the GLCapture class that takes care of initialization and provides just a few methods.

public void start(String videoPath)

Sets up the video format and the destination file path, then launches the capturing process.

public void stop()

Interrupts the capturing process.

public void beginCaptureFrame()

This method checks whether the video surface is configured. If it is not, the method first configures the video surface by calling setSurfaceSize(), and then calls beginCaptureFrame() of GLCapture.

public void endCaptureFrame()

Calls endCaptureFrame() of GLCapture.

As you can see, this class exposes only four methods for working with GLCapture. You can use it in your own applications to start experimenting with video capturing by adding just a few lines of code.
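To illustrate, here is a minimal sketch of such a wrapper. The class name VideoCapture, the hard-coded format values, and the GLCapture method names (setTargetFile(), setTargetVideoFormat(), start(), stop()) follow the Media for Mobile samples and should be checked against your library version:

```java
public class VideoCapture {
    private static final VideoCapture instance = new VideoCapture();

    private GLCapture capturer;
    private boolean isSurfaceConfigured = false;

    private VideoCapture() { }

    public static VideoCapture getInstance() {
        return instance;
    }

    public void start(String videoPath) {
        capturer = new GLCapture(new AndroidMediaObjectFactory());
        VideoFormatAndroid videoFormat = new VideoFormatAndroid("video/avc", 720, 1280);
        capturer.setTargetVideoFormat(videoFormat);
        try {
            capturer.setTargetFile(videoPath);
        } catch (IOException e) {
            e.printStackTrace();
        }
        capturer.start();
    }

    public void stop() {
        capturer.stop();
        isSurfaceConfigured = false;
    }

    public void beginCaptureFrame() {
        if (!isSurfaceConfigured) {
            // setSurfaceSize() requires an active OpenGL context
            capturer.setSurfaceSize(720, 1280);
            isSurfaceConfigured = true;
        }
        capturer.beginCaptureFrame();
    }

    public void endCaptureFrame() {
        capturer.endCaptureFrame();
    }
}
```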


The FrameBuffer class takes care of initializing a frame buffer to be used as a render target.

public void create(int width, int height)

This method initializes a frame buffer and a texture; it should be called before the first use.

public void bind()

Makes the frame buffer the current render target.

public void unbind()

Switches back to the default render target.

public int getTexture()

Returns a handle to the texture attached to this frame buffer.
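For reference, the initialization performed by create() can be sketched with standard GLES20 calls; the actual sample implementation may differ in details such as filtering modes or a depth attachment:

```java
// Create a texture and a frame buffer object, and attach the
// texture as the frame buffer's color attachment.
int[] fboId = new int[1];
int[] texId = new int[1];

GLES20.glGenTextures(1, texId, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

GLES20.glGenFramebuffers(1, fboId, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboId[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, texId[0], 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // back to the default target
```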

To use a FrameBuffer object in the rendering loop:

  • First of all, make it the render target by calling bind();
  • Then render the scene to the frame buffer;
  • Switch back to the default render target with unbind();
  • After that we have a texture with the scene rendered into it. Next we need to render this texture to the display and to the video surface. And here comes another class that does all this work for us: FullFrameTexture.


This class implements a full-screen renderer. Everything needed to render a full-screen texture (shaders, quad coordinates, orientation matrix) is created inside this class.

public void draw(int textureId)

This method renders a full-screen quad to the current render target, using the texture whose id is passed as a parameter.

Finally, let me show you how to use all the mentioned classes inside the rendering loop.

// Variable names are illustrative: frameBuffer is a FrameBuffer,
// texture is a FullFrameTexture, videoCapture is the GLCapture wrapper,
// and renderScene() stands for the application's own draw calls.

frameBuffer.bind();                     // Bind frame buffer as a render target
renderScene();                          // Render scene to frame buffer
frameBuffer.unbind();                   // Restore rendering target
texture.draw(frameBuffer.getTexture()); // Draw texture into display
videoCapture.beginCaptureFrame();       // Make video surface a rendering target
texture.draw(frameBuffer.getTexture()); // Draw texture into video surface
videoCapture.endCaptureFrame();         // Restore rendering target

