Adding and Retrieving Closed-Caption Messages in AVC and MPEG-2 Streams

In this article, we show how to add closed-caption (CC) data to AVC and MPEG2 streams, and how to retrieve it, using Intel® Media Server Studio and Intel® INDE. In the first part, we illustrate with code examples how to add CC messages to AVC and MPEG2 streams during encoding. In the second part, we show how to retrieve these messages using the decoder.

To test closed captioning in AVC/MPEG2 with the following code snippets, we recommend using the tutorials rather than the samples. (You can download the tutorials from the TUTORIALS tab here.) The tutorials are much easier to follow and are more heavily commented. Use the simple_6_encode_vmem_lowlatency tutorial for encoding and the simple_2_decode tutorial for decoding. In the code snippets below, we point out where this code should be added for encode and decode.

Let's get started!

Adding Closed-caption Messages to the AVC and MPEG2 Encode Stream

In this section, we will see how to add closed-caption data to AVC and MPEG2 streams during the encoding stage. Below are the high-level steps, followed by code snippets.

Step 1: Create the SEI/user_data payload with the appropriate header and message fields. For details on the AVC and MPEG2 payload structures, refer to this document: Section 6.4.2 for the AVC stream, and Section 6.2.3 for the MPEG2 stream.

Step 2a: Populate the mfxPayload structure with this payload and header information.

Step 2b: Pass the mfxPayload structure to the mfxEncodeCtrl structure.

Step 3: Pass this structure to the EncodeFrameAsync function (first parameter), and you're done!

Code example to add CC messages to AVC, in the main encoding loop:

#define MESSAGE_SIZE 20
typedef struct {
				 unsigned char countryCode;
				 unsigned char countryCodeExtension[2];
				 unsigned char user_identifier[4];
				 unsigned char type_code;
				 unsigned char payloadBytes[MESSAGE_SIZE];
				 unsigned char marker_bits;
			} userdata_reg_t35;
			/*** STEP 1: START: SEI payload: refer to <link to AVC> for the format */
			/* Populating the header for the SEI payload. In most cases, these assignments will not change */
			userdata_reg_t35 m_userSEIData;
			m_userSEIData.countryCode = 0xB5;
			m_userSEIData.countryCodeExtension[0] = 0x31;
			m_userSEIData.countryCodeExtension[1] = 0x00;
			m_userSEIData.user_identifier[0] = 0x47; // "GA94" ATSC identifier
			m_userSEIData.user_identifier[1] = 0x41;
			m_userSEIData.user_identifier[2] = 0x39;
			m_userSEIData.user_identifier[3] = 0x34;
			m_userSEIData.type_code = 0x03;
			m_userSEIData.marker_bits = 0xFF;
			/* Populate the actual message. In this example it is "Frame: <frameNum>" */
			sprintf((char*)m_userSEIData.payloadBytes, "%s%d", "Frame: ", nFrame);
			/*** STEP 1: END: SEI payload: refer to <link to AVC> for the format */

			/*** STEP 2a: START: Fill mfxPayload structure with SEI Payload */
			mfxU8 m_seiData[100]; // Arbitrary size
			mfxPayload m_mySEIPayload;
			memset(&m_mySEIPayload, 0, sizeof(m_mySEIPayload));
			m_mySEIPayload.Type = 0x04; // SEI payload type: user_data_registered_itu_t_t35
			m_mySEIPayload.BufSize = sizeof(userdata_reg_t35) + 2; // 2 bytes for SEI header (type + size)
			m_mySEIPayload.NumBit = m_mySEIPayload.BufSize * 8;
			m_mySEIPayload.Data = m_seiData;

			// Insert SEI header and SEI msg into data buffer
			m_seiData[0] = (mfxU8)m_mySEIPayload.Type; // SEI type
			m_seiData[1] = (mfxU8)(m_mySEIPayload.BufSize-2); // Size of following msg
			memcpy(m_seiData+2, &m_userSEIData, sizeof(userdata_reg_t35));
			mfxPayload* m_payloads[1]; 
			m_payloads[0] = &m_mySEIPayload;
			/*** STEP 2a: END: Fill mfxPayload structure with SEI Payload */

			/*** STEP 2b: START: Encode control structure initialization */
			mfxEncodeCtrl m_encodeCtrl;
			memset(&m_encodeCtrl, 0, sizeof(m_encodeCtrl));
			m_encodeCtrl.Payload = (mfxPayload**)&m_payloads[0];
			m_encodeCtrl.NumPayload = 1;
			/*** STEP 2b: END: Encode control structure initialization */
			nEncSurfIdx = Get Free Surface;
			Surface Lock;
			pmfxSurfaces[nEncSurfIdx] = Load Raw Frame;
			Surface Unlock;
			/*** STEP 3: Encode frame: Pass mfxEncodeCtrl pointer */
			sts = mfxENC.EncodeFrameAsync(&m_encodeCtrl, pmfxSurfaces[nEncSurfIdx], &mfxBS, &syncp);


Code example to add CC messages to MPEG2, in the main encoding loop:

Adding payloads to MPEG2 is similar to the AVC example above. The difference is the format of the MPEG2 user_data start code as compared to the SEI message. Below, we illustrate how to populate an MPEG2 payload, in accordance with the ATSC standard for MPEG2 video.

#define USER_START_CODE 0x1B2
#define MESSAGE_SIZE 20
typedef struct{
				mfxU8 atsc_identifier[4];
				mfxU8 type_code;
				/* For type 0x03, a few additional bits precede the data field - refer to <link to cc_data()> */
				mfxU8 additional_bits[2];
				unsigned char cc_data[MESSAGE_SIZE];
			} ATSC_user_data;

			/** STEP 1 */
			ATSC_user_data atsc;
			atsc.atsc_identifier[0] = (mfxU8)0x47; // "GA94" ATSC identifier
			atsc.atsc_identifier[1] = (mfxU8)0x41;
			atsc.atsc_identifier[2] = (mfxU8)0x39;
			atsc.atsc_identifier[3] = (mfxU8)0x34;
			atsc.type_code = 0x03;					
			atsc.additional_bits[0] = (mfxU8)0x12;	//00010010;		//cc_count, addnl data, cc data, em data processed
			atsc.additional_bits[1] = (mfxU8)0xff;		//reserved			
			sprintf((char*)atsc.cc_data, "%s%d", "Frame: ", nFrame);
			/** STEP 2a */
			mfxU8 m_seiData[100]; // Arbitrary size
			mfxPayload m_Payload;
			memset(&m_Payload, 0, sizeof(m_Payload));
			m_Payload.Type = USER_START_CODE;
			m_Payload.BufSize = MESSAGE_SIZE + 7;
			m_Payload.NumBit = m_Payload.BufSize * 8;
			m_Payload.Data = m_seiData;
			memcpy(m_seiData, &atsc, 7);
			memcpy(m_seiData+7, &atsc.cc_data, MESSAGE_SIZE);
			mfxPayload* m_payloads[1]; 
			m_payloads[0] = &m_Payload;

			/** STEP 2b */
			mfxEncodeCtrl m_encodeCtrl;
			memset(&m_encodeCtrl, 0, sizeof(m_encodeCtrl));
			m_encodeCtrl.Payload = (mfxPayload**)&m_payloads[0];
			m_encodeCtrl.NumPayload = 1;
			nEncSurfIdx = Get Free Surface;
			Surface Lock;
			pmfxSurfaces[nEncSurfIdx] = Load Raw Frame;
			Surface Unlock;
			/*** STEP 3 */
			sts = mfxENC.EncodeFrameAsync(&m_encodeCtrl, pmfxSurfaces[nEncSurfIdx], &mfxBS, &syncp);

In this section, we have seen how to add payloads to the AVC and MPEG2 encode stream. You can verify that the payloads were added by opening the output bitstream in an editor and viewing it in HEX mode (for instance, in GVim). You should see each frame carrying a payload that contains "Frame: <framenum>" along with the payload meta information. Below is an encoded out.h264 file that uses the above code snippet in the simple_6_encode_vmem_lowlatency tutorial to add CC messages. You can see the SEI message "Frame: 6" highlighted.

Retrieving Messages from the Decoded Stream

We just saw how to add CC messages to the encode stream. In this section, we will see how to retrieve the encoded SEI/user_data messages from AVC and MPEG2 streams respectively. The SDK provides an API, GetPayload(), for this purpose, and we will illustrate how to use it for the AVC and MPEG2 cases. The GetPayload() call follows the DecodeFrameAsync() call, and returns the mfxPayload structure populated with the message, the number of bits, and the timestamp. Please note that you are required to initialize the BufSize and Data members of the structure before you call GetPayload().

sts = mfxDEC.DecodeFrameAsync(&mfxBS, pmfxSurfaces[nIndex], &pmfxOutSurface, &syncp);

	mfxPayload dec_payload;
	mfxU64 ts;
	dec_payload.Data = new mfxU8[100];
	dec_payload.BufSize = 100; // capacity of the Data buffer allocated above
	dec_payload.NumBit = 1;

	/* Since the decode function is asynchronous, loop over GetPayload() until all ready messages are drained */
	while (dec_payload.NumBit > 0) {
		mfxStatus st = mfxDEC.GetPayload(&ts, &dec_payload);
		if (st == MFX_ERR_NONE && dec_payload.NumBit > 0)
			fwrite(dec_payload.Data, sizeof(mfxU8), dec_payload.NumBit / 8, stdout); // For debug purposes - prints the payload to the console. Works for both AVC and MPEG2.
	}

Below is the output of the fwrite for the decode of the out.h264 file we encoded above. The console shows the output of the GetPayload() call for each frame.

This concludes our how-to on adding and retrieving CC messages in AVC and MPEG2 streams. You can also refer to Section 4.14 of our documentation for the pseudo-code, and for information on the APIs, refer to mediasdk-man.pdf in the doc folder of the Media SDK installation.




Hi Sravanthi K,

I am a beginner with Media Server Studio and its capabilities. I am doing a PoC based on your post, but I am not able to get to the point you describe.

I have modified the "simple_6_encode_vmem_lowlatency/src/simple_encode_vmem_lowlat.cpp" file as you described in "Stage 1: Main encoding loop" (the while loop) and also passed the prepared m_encodeCtrl structure to the mfxENC.EncodeFrameAsync function as "sts = mfxENC.EncodeFrameAsync(&m_encodeCtrl, pmfxSurfaces[nEncSurfIdx], &mfxBS, &syncp);". But I am getting ERROR -16, "Undefined behavior".

See log below:

libva info: VA-API version 0.99.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'iHD'
libva info: Trying to open /opt/intel/mediasdk/lib64/
libva info: Found init function __vaDriverInit_0_32
libva info: va_openDriver() returns 0

 Undefined behavior. src/simple_encode_vmem_lowlat.cpp 387

The default application is working fine and the H.264 file is generated correctly.

Below is my CLI:

simple_encode_vmem_lowlat -auto -g 720x480 -b 5000 -f 60/60 In_720x480_i60.yuv out.h264

Also tried with -hw and -sw options, but without success. Please advise.




The reference links are giving 404. For AVC, they moved it to section 8.1 in the following SCTE document:

For now, it is worth mentioning this: in the above example, we are using low-latency parameters (GopSize is 1 and async depth is 1). If you set the GOP size to be >1 (I, B, P frames), the developer is required to ensure that the correct SEI messages are encoded with the frames. To quote from the documentation -

During the execution of an asynchronous pipeline, the application must consider the input data in use and must not change it until the execution has completed. The application must also consider output data unavailable until the execution has finished. In addition, for encoders, the application must consider extended and payload buffers in use while the input surface is locked.

Hello Swati - Can you use Elecard or another stream analyzer tool to confirm that the sei_message() is being encoded correctly? If you can see the messages in those tools, then the remaining step is enabling CC in the player. If you have more questions on this, please feel free to post them on our support forum, and we can help you.

VideoReDo will play 608/708 captions in an H.264 file.

However, there are errors in the SEI sample code: the CC flag and count byte are not being included in the payload, and the ATSC identifier bytes are reversed.

I'm also having trouble with the encoder not putting in the SEI captions in the correct order, per my comment here:

I can see the payload data in the output h264 file, but when I try to play it, it does not show any captions. Do we have to add anything else so that the player will recognize that there are captions in the video file?
