Google Android Multimedia API Playback with RenderScript Effect

1. Introduction

2. Prerequisites

3. Setting Up
   3.1 Decoder
   3.2 RenderScript

4. Running Decoder

5. Applying RenderScript Effect

6. Color Conversion

7. Conclusion


Introduction

This article shows how to use the Android media classes and Android RenderScript to apply effects to decoded video. To keep things simple, the article deals with video playback only; audio is not covered. There are essentially two worker threads: a Decoder Thread and a RenderScript Thread.

The Decoder Thread does the following:
1. Decodes the bit-stream and puts decoded pictures into the decoded-picture buffer.
2. Copies each newly decoded frame into a RenderScript Allocation.

The RenderScript Thread does the following:
1. Post-processes the decoded video frame to apply color-space conversion and a grayscale effect.
2. Sends the output to another Allocation, which is associated with a SurfaceTexture.




Prerequisites

Android media API playback requires prior experience with Android app development. The following samples and articles are recommended and provide further information on the media decoder and RenderScript.

1. Android BasicMediaDecoder - the Android SDK sample demonstrating media decoding

2. Android BasicRenderScript - the Android SDK sample demonstrating RenderScript basics

3. Intel INDE Code Sample - Getting Started with RenderScript


Setting Up


Here is a code snippet for setting up the decoder. Please refer to the Android MediaExtractor class documentation for more information.

// Get the media file
String mediaPath = Environment.getExternalStorageDirectory().getPath() + "/Movies/test4.mp4";
// Create a Uri which parses the given encoded URI string
Uri uri = Uri.parse(mediaPath);

try {
    // Set the data source for the MediaExtractor
    mMediaExtractor.setDataSource(this, uri, null);
    // Get the number of tracks
    int numTracks = mMediaExtractor.getTrackCount();
    for (int i = 0; i < numTracks; i++) {
        // Get the format of the track
        MediaFormat format = mMediaExtractor.getTrackFormat(i);
        String mimeType = format.getString(MediaFormat.KEY_MIME);
        if (mimeType.startsWith("video/")) {
            // For a video track, read the frame width and height
            mVideoWidth = format.getInteger(MediaFormat.KEY_WIDTH);
            mVideoHeight = format.getInteger(MediaFormat.KEY_HEIGHT);
            // Buffer width and height; round the height up to a multiple of 16
            mBufferWidth = mVideoWidth;
            mBufferHeight = (mVideoHeight + 15) & ~15;
            // Initialize RenderScript (see Section 3)
            // Create a decoder for this MIME type
            mDecoder = MediaCodec.createDecoderByType(mimeType);
            // Configure the decoder
            mDecoder.configure(format, // media format from the stream
                    null,              // null surface: decoded frames go to ByteBuffers
                    null,              // no crypto
                    0);                // flags
            // Select this track
            mMediaExtractor.selectTrack(i);
            // Start the decoder
            mDecoder.start();
            // Query input and output buffers
            mInputBuffers = mDecoder.getInputBuffers();
            mOutputBuffers = mDecoder.getOutputBuffers();
            // Allocate a byte array to temporarily hold a decoded frame (the input to RenderScript);
            // 3/2 because the decoder outputs YUV 4:2:0, which has 12 bits per pixel
            mLocalOutputBuffers = new byte[mBufferWidth * mBufferHeight * 3 / 2];
            break;
        }
    }
} catch (IOException e) {
    // Handle this condition, e.g. report the error and abort playback
}
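
The snippet above only configures and starts the decoder. The Decoder Thread must also feed compressed samples from the MediaExtractor into the codec's input buffers. Here is a minimal sketch of that step, reusing the fields from the snippet above; TIMEOUT_US is an assumed dequeue timeout (for example, 10000 microseconds) and is not part of the original sample:

// Feed one compressed sample from the extractor to the decoder
int inIndex = mDecoder.dequeueInputBuffer(TIMEOUT_US);
if (inIndex >= 0) {
    ByteBuffer inBuf = mInputBuffers[inIndex];
    int sampleSize = mMediaExtractor.readSampleData(inBuf, 0);
    if (sampleSize < 0) {
        // End of stream: tell the decoder that no more input is coming
        mDecoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
    } else {
        // Queue the sample with its presentation timestamp, then advance to the next one
        mDecoder.queueInputBuffer(inIndex, 0, sampleSize, mMediaExtractor.getSampleTime(), 0);
        mMediaExtractor.advance();
    }
}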


Here is a code snippet for setting up RenderScript. Please refer to the Android RenderScript documentation for more information.

// Create RenderScript context
mRS = RenderScript.create(this);

// Load the RenderScript
mScript = new ScriptC_process(mRS);

// Create Allocations for reading into and out of the script

// Input Allocation
// Create a new pixel Element of type YUV
Element elemYUV = Element.createPixel(mRS, Element.DataType.UNSIGNED_8, Element.DataKind.PIXEL_YUV);
// Create a new Type.Builder object backed by elemYUV
Type.Builder TypeYUV = new Type.Builder(mRS, elemYUV);
// Set the YUV format to NV21; the decoder outputs NV21 surfaces
TypeYUV.setYuvFormat(ImageFormat.NV21);

// Create the input Allocation
mAllocationYUV = Allocation.createTyped(mRS,
        TypeYUV.setX(mVideoWidth).setY(mVideoHeight).create(), // Allocation type
        Allocation.MipmapControl.MIPMAP_NONE,                  // no MIPMAP
        Allocation.USAGE_SCRIPT);                              // will be used by a script

// Output Allocation
// Create a new pixel Element of type RGBA
Element elemOUT = Element.createPixel(mRS, Element.DataType.UNSIGNED_8, Element.DataKind.PIXEL_RGBA);

// Create a new Type.Builder object backed by elemOUT
Type.Builder TypeOUT = new Type.Builder(mRS, elemOUT);

// Create the output Allocation
mAllocationOUT = Allocation.createTyped(mRS,
        TypeOUT.setX(mVideoWidth).setY(mVideoHeight).create(), // Allocation type
        Allocation.MipmapControl.MIPMAP_NONE,                  // no MIPMAP
        Allocation.USAGE_SCRIPT |                              // will be used by a script
        Allocation.USAGE_IO_OUTPUT);                           // will act as a SurfaceTexture producer

// Associate the Surface with the output allocation
// Get the SurfaceTexture from the playback view
SurfaceTexture surfaceTexture = mPlaybackView.getSurfaceTexture();
if (surfaceTexture != null) {
    // Create a new Surface backed by the SurfaceTexture
    Surface surface = new Surface(surfaceTexture);
    // Make the output allocation the producer for this surface
    mAllocationOUT.setSurface(surface);
}
// Set the input allocation on the script (the gIn global; see Section 6)
mScript.set_gIn(mAllocationYUV);

Running Decoder

Here is a code snippet for copying decoded buffers into the RenderScript allocation; it is part of the Decoder Thread. Please refer to the Android MediaCodec class documentation for more information on running the decode loop.

Prior to calling mDecoder.releaseOutputBuffer(index, ...), do the following:

1. Get the decoded buffer
2. Copy it to the allocation
3. Create a new thread to apply the RenderScript effect

// decoded buffer size
int bufsize = mOutputBuffers[index].capacity();

// get the decoded buffer
mOutputBuffers[index].get(mLocalOutputBuffers, 0, bufsize);

// copy to the input Allocation
mAllocationYUV.copyFrom(mLocalOutputBuffers);

// rewind the ByteBuffer so that its position is back at 0
mOutputBuffers[index].rewind();

// start a new thread for applying the RenderScript effect
new ProcessData().execute();

// release the output buffer back to the decoder
mDecoder.releaseOutputBuffer(index, render);
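
For context, the snippet above runs inside the Decoder Thread's output loop. A minimal sketch of that loop, under the same assumptions as before (TIMEOUT_US is an assumed dequeue timeout):

// Drain one decoded frame from the codec
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int index = mDecoder.dequeueOutputBuffer(info, TIMEOUT_US);
if (index >= 0) {
    // A decoded frame is available: copy it to the allocation, start
    // ProcessData, and release the buffer as shown in the snippet above
} else if (index == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
    // The output buffers changed; re-query them
    mOutputBuffers = mDecoder.getOutputBuffers();
} else if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    // The output format changed, e.g. a new width/height
    MediaFormat newFormat = mDecoder.getOutputFormat();
}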


Applying RenderScript Effect

Here is a code snippet for the RenderScript Thread. Please refer to the AsyncTask documentation for more information on running work off the UI thread.

private class ProcessData extends AsyncTask<byte[], Void, Boolean> {
    @Override
    protected Boolean doInBackground(byte[]... args) {
        // Apply the RenderScript YUV-to-RGB + grayscale effect
        mScript.forEach_yuvToRgb_greyscale(mAllocationYUV, mAllocationOUT);
        // Send the output allocation to the SurfaceTexture
        mAllocationOUT.ioSend();
        return true;
    }
}
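
Note that since Android 3.0, execute() runs all AsyncTask instances serially on a single background thread, so frames are processed in decode order; launching one task per frame is therefore safe, though it adds some scheduling overhead.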


Color Conversion

Here is a simple script function that performs the conversion. This version samples only the luma (Y) plane, so its output is grayscale (the effect described above).

// the input YUV allocation, set from Java via mScript.set_gIn()
rs_allocation gIn;

void yuvToRgb_greyscale(const uchar *v_in, uchar4 *v_out, uint32_t x, uint32_t y) {
    // Read the luma (Y) component for this pixel from the YUV input allocation
    uchar yp = rsGetElementAtYuv_uchar_Y(gIn, x, y) & 0xFF;
    uchar4 res4;
    // Replicate luma into R, G, and B to produce a grayscale pixel
    res4.r = yp;
    res4.g = yp;
    res4.b = yp;
    res4.a = 0xFF;

    *v_out = res4;
}
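
The kernel above uses only the luma plane, which is what produces the grayscale look. For full-color output, the chroma planes can be sampled as well. Here is a sketch of such a kernel (not part of the original sample) using the standard RenderScript helpers rsGetElementAtYuv_uchar_U, rsGetElementAtYuv_uchar_V, and rsYuvToRGBA_uchar4:

void yuvToRgb(const uchar *v_in, uchar4 *v_out, uint32_t x, uint32_t y) {
    // Sample the luma and both chroma components for this pixel
    uchar yp = rsGetElementAtYuv_uchar_Y(gIn, x, y);
    uchar u  = rsGetElementAtYuv_uchar_U(gIn, x, y);
    uchar v  = rsGetElementAtYuv_uchar_V(gIn, x, y);
    // The built-in helper performs the YUV-to-RGBA color-space math
    *v_out = rsYuvToRGBA_uchar4(yp, u, v);
}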



Conclusion

The Android Multimedia Framework provides a high-level API for using the device's hardware multimedia capabilities. Video processing is a computationally intensive workload, and RenderScript provides a framework for parallelizing it. This article described a method for making the Android Multimedia Framework interoperate with RenderScript in order to apply post-processing effects to decoded video.

You can use the forum to post and participate in discussion related to this article.
