Getting Started with Intel® IPP Unified Media Classes Sample


Overview

The Unified Media Classes (UMC) are a set of unified C++ interfaces for building the standard blocks of a media application, including codecs, renderers, splitters, etc. UMC includes abstract base classes, from which cross-platform and cross-OS derived classes are built, as well as OS-dependent renderers and readers.

This guide shows you how to use some basic UMC classes to build a simple decoder and encoder. For more information, refer to the UMC sample code Reference Manual in the sample code folder \audio-video-codecs\doc and the UMC sample applications in \audio-video-codecs\application.

The contents apply to the latest Intel IPP UMC sample and, if you are an Intel Parallel Studio user, to the UMC sample in the Intel IPP samples for Parallel Studio 2011.

Creating a simple decoder

The UMC::VideoDecoder class provides the basic interface for video decoders. It includes methods that initialize the decoder, decompress frames, destroy the decoder, etc. Users need to use a class derived from UMC::VideoDecoder (e.g. UMC::H263VideoDecoder, UMC::H264VideoDecoder, UMC::MPEG4VideoDecoder) to complete a video decoding process.

The following are the basic steps to use a UMC decoder:

1. Set the decoding parameters UMC::VideoDecoderParams.
2. Initialize a VideoDecoder class with UMC::VideoDecoderParams.
3. Get decoded frames by calling the GetFrame() function of the VideoDecoder class.
4. Retrieve the decoded frames still buffered by the decoder by calling GetFrame() with a NULL input pointer.

The sample code below decodes a raw H.264 video stream in “cVideoData”, puts the decoded YUV data into the “cYUVData” buffer, and sets the image size and decoded frame number. Please click here to get the full sample code.

void DecodeStream( Ipp8u *cVideoData, int VideoDataSize,
                   Ipp8u *cYUVData, int &imgWidth, int &imgHeight, int &frameNumber )
{
    UMC::Status status;
    UMC::MediaData DataIn;
    UMC::VideoData DataOut;
    UMC::VideoDecoderParams Params;
    UMC::H264VideoDecoder H264Decoder;
    int frameSize = 0;

    DataIn.SetBufferPointer(cVideoData, VideoDataSize);
    DataIn.SetDataSize(VideoDataSize);

    // Use default parameters; thread number = 1
    Params.m_pData = &DataIn;
    Params.lFlags = 0;
    Params.numThreads = 1;
    if ((status = H264Decoder.Init(&Params)) != UMC::UMC_OK)
        return;

    H264Decoder.GetInfo(&Params);
    imgWidth = Params.info.clip_info.width;
    imgHeight = Params.info.clip_info.height;

    frameSize = imgWidth * imgHeight * 3 / 2;   // YV12: 12 bits per pixel

    DataOut.Init(imgWidth, imgHeight, UMC::YV12, 8);
    DataOut.SetBufferPointer(cYUVData, frameSize);

    int exit_flag = 0;
    frameNumber = 0;
    do {
        status = H264Decoder.GetFrame(&DataIn, &DataOut);
        if (status == UMC::UMC_OK) {
            cYUVData += frameSize;
            DataOut.SetBufferPointer(cYUVData, frameSize);
            frameNumber++;
        }
        if ((status != UMC::UMC_OK) || (frameNumber >= MAXFRAME))
            exit_flag = 1;
    } while (exit_flag != 1);

    // Retrieve the frames still buffered in the decoder
    do {
        status = H264Decoder.GetFrame(NULL, &DataOut);
        if (status == UMC::UMC_OK) {
            cYUVData += frameSize;
            DataOut.SetBufferPointer(cYUVData, frameSize);
            frameNumber++;
        }
    } while (status == UMC::UMC_OK);
    return;
}


Building a simple encoder

The UMC::VideoEncoder class provides the interface for video encoders. To complete a video encoding process, users need to use a class derived from UMC::VideoEncoder (e.g. UMC::H264VideoEncoder, UMC::MPEG2VideoEncoder, UMC::MPEG4VideoEncoder) to compress the input data.

The following are the basic steps to use a UMC encoder:

1. Set the encoding parameters UMC::VideoEncoderParams or its derived class.
2. Initialize a VideoEncoder class.
3. Get each encoded frame by calling the VideoEncoder function GetFrame(). Encoding is performed frame by frame.

The example code below encodes the YUV data in the memory buffer “cYUVData” into an H.264 video stream. The encoded H.264 data is put into “cVideoData”. Please check here to get the complete sample code.

void EncodeStream(Ipp8u *cYUVData, int imgWidth, int imgHeight, int frameNumber,
                  Ipp8u *cVideoData, int &VideoDataSize)
{
    UMC::Status status;
    UMC::MediaData DataOut;
    UMC::VideoData DataIn;
    UMC::H264EncoderParams Params;
    UMC::H264VideoEncoder H264Encoder;
    int FrameSize;

    Params.key_frame_controls.method = 1;
    Params.info.clip_info.height = imgHeight;
    Params.info.clip_info.width = imgWidth;
    Params.info.bitrate = 1000000;
    Params.numThreads = 1;

    if ((status = H264Encoder.Init(&Params)) != UMC::UMC_OK)
        return;

    FrameSize = imgWidth * imgHeight * 3 / 2;
    DataIn.Init(imgWidth, imgHeight, UMC::YV12, 8);
    DataIn.SetBufferPointer(cYUVData, FrameSize);
    DataIn.SetDataSize(FrameSize);

    DataOut.SetBufferPointer(cVideoData, MAXVIDEOSIZE);

    VideoDataSize = 0;
    int nEncodedFrames = 0;
    while (nEncodedFrames < frameNumber)
    {
        status = H264Encoder.GetFrame(&DataIn, &DataOut);
        if (status == UMC::UMC_OK)
        {
            nEncodedFrames++;
            VideoDataSize += DataOut.GetDataSize();
            DataOut.MoveDataPointer(DataOut.GetDataSize());

            // Advance to the next input frame
            cYUVData += FrameSize;
            DataIn.SetBufferPointer(cYUVData, FrameSize);
            DataIn.SetDataSize(FrameSize);
        }
    }
    return;
}


Using a splitter

The Intel IPP UMC splitter gives users the ability to separate audio, video, and auxiliary elementary streams (ES) from several file formats (MPEG TS, MP4, and AVI). The splitter class receives data through the UMC::DataReader interface.

The following are some basic steps to use a UMC splitter:

1. Set the splitter parameters UMC::SplitterParams.
2. Call the Init() function to initialize the splitter according to the input parameters.
3. Call the GetInfo() method to collect information on the processed stream.
4. Call the Run() function to start the internal processing thread(s).
5. Call the GetNextData() and CheckNextData() methods to obtain the next data chunk from the stream.

The example code below uses the MP4 splitter to separate the H.264 video from the input file and then uses the H.264 decoder to decode the video stream. Please click here to get the complete sample code.
void DecodeStream(vm_char * inputfilename, vm_char * outputfilename )
{
   Ipp32u videoTrack=0; int exit_flag =0;
   UMC::Status status;  
   UMC::MediaData in; UMC::VideoData out;	
   UMC::FIOReader reader; UMC::FileReaderParams readerParams;
   UMC::SplitterParams splitterParams; UMC::SplitterInfo * streamInfo;
   UMC::MP4Splitter Splitter;
	
   UMC::VideoStreamInfo *videoInfo=NULL;
   UMC::VideoDecoder *  videoDecoder; UMC::VideoDecoderParams videoDecParams;
   UMC::FWVideoRender fwRender; UMC::FWVideoRenderParams fwRenderParams;
   
   readerParams.m_portion_size = 0;
   vm_string_strcpy(readerParams.m_file_name, inputfilename);
   if((status = reader.Init(&readerParams))!= UMC::UMC_OK) 
          return;
   splitterParams.m_lFlags = UMC::VIDEO_SPLITTER;
   splitterParams.m_pDataReader = &reader;
   if((status = Splitter.Init(splitterParams))!= UMC::UMC_OK)
          return;
   Splitter.GetInfo(&streamInfo);
   for (videoTrack = 0; videoTrack <  streamInfo->m_nOfTracks; videoTrack++) {
         if (streamInfo->m_ppTrackInfo[videoTrack]->m_Type == UMC::TRACK_H264)
              break;
   }
   videoInfo = (UMC::VideoStreamInfo*)(streamInfo->m_ppTrackInfo[videoTrack]->
                       m_pStreamInfo);
   if(videoInfo->stream_type!=UMC::H264_VIDEO)
         return;
   videoDecParams.info =  (*videoInfo);
   videoDecParams.m_pData = streamInfo->m_ppTrackInfo[videoTrack]->m_pDecSpecInfo;
   videoDecParams.numThreads = 1;
   videoDecoder = (UMC::VideoDecoder*)(new UMC::H264VideoDecoder());
   if((status = videoDecoder->Init(&videoDecParams))!= UMC::UMC_OK)
          return;
   fwRenderParams.out_data_template.Init(videoInfo->clip_info.width, videoInfo->clip_info.height,
                                                             videoInfo->color_format);
   fwRenderParams.pOutFile = outputfilename;
   if((status = fwRender.Init(&fwRenderParams)) != UMC::UMC_OK)
          return;
   Splitter.Run();
   do{
     do{
           if (in.GetDataSize() < 4) {
                do{    status= Splitter.GetNextData(&in,videoTrack);
                         if(status==UMC::UMC_ERR_NOT_ENOUGH_DATA)
                            vm_time_sleep(5);
                 }while(status==UMC::UMC_ERR_NOT_ENOUGH_DATA);
                 if(((status != UMC::UMC_OK) && (status != UMC::UMC_ERR_END_OF_STREAM)) ||
                        ((status == UMC::UMC_ERR_END_OF_STREAM) && (in.GetDataSize()<4))) {
                              exit_flag=1;
                 }
            }
            fwRender.LockInputBuffer(&out);
            videoDecoder->GetFrame(&in,&out);
            fwRender.UnLockInputBuffer(&out);
            fwRender.RenderFrame();
    }while(!exit_flag&&(status ==UMC::UMC_ERR_NOT_ENOUGH_DATA||status==UMC::UMC_ERR_SYNC));
  }while (exit_flag!=1);

  do{  
          fwRender.LockInputBuffer(&out);
          status  = videoDecoder->GetFrame(NULL,&out);
          fwRender.UnLockInputBuffer(&out);
          fwRender.RenderFrame();
   }while(status == UMC::UMC_OK);
   delete videoDecoder;
}



Using a muxer

The Intel IPP UMC sample code provides MPEG2 and MP4 muxer classes, which enable users to multiplex audio, video, and other elementary streams (ES) into a container.

The following are some steps to use a UMC Muxer class:

1. Set the muxer parameters UMC::MPEG2MuxerParams or UMC::MP4MuxerParams.
2. Call the Init() function to initialize the muxer according to the input parameters.
3. Call PutVideoData()/PutAudioData() to copy video or audio data into the output buffer.
4. Call Close() to flush all samples remaining in the internal buffers and close the muxer.

The example code below uses the MPEG2 TS muxer to put an H.264 video stream into a TS stream. Check here to get the full sample code.
void EncodeStream (Ipp8u *cYUVData, int imgWidth, int imgHeight, int frameNumber,
                             vm_char * tsFileName)    
{   
    UMC::Status status;   
    UMC::MediaData  DataOut, MuxData; UMC::VideoData DataIn;   
    UMC::H264EncoderParams Params;   UMC::H264VideoEncoder H264Encoder;    
    UMC::MPEG2Muxer Muxer;  UMC::MPEG2MuxerParams MuxerParams;   
    Ipp8u *cMaxVideoData=NULL;   
    int FrameSize;
  
    UMC::VideoStreamInfo VideoInfo;   
       
    UMC::FileWriter Writer;  UMC::FileWriterParams WriterParams;   
    strcpy(WriterParams.m_file_name, tsFileName);   
    Writer.Init(&WriterParams);   
  
    MuxerParams.m_lpDataWriter = &Writer;   
    MuxerParams.m_SystemType = UMC::MPEG2_TRANSPORT_STREAM;   
    MuxerParams.m_nNumberOfTracks = 1;   
    MuxerParams.pTrackParams = new UMC::TrackParams[MuxerParams.m_nNumberOfTracks];   
    
    VideoInfo.clip_info.height=imgHeight; VideoInfo.clip_info.width=imgWidth;   
    VideoInfo.stream_type=UMC::H264_VIDEO;                
    VideoInfo.color_format=UMC::YV12; VideoInfo.interlace_type=UMC::PROGRESSIVE; 
    VideoInfo.bitrate=1000000;  VideoInfo.streamPID=0;    
  
    MuxerParams.pTrackParams[0].type = UMC::VIDEO_TRACK;   
    MuxerParams.pTrackParams[0].info.video = &VideoInfo;   
    MuxerParams.pTrackParams[0].bufferParams.m_prefInputBufferSize=2000000;   
    MuxerParams.pTrackParams[0].bufferParams.m_prefOutputBufferSize=2000000;   
  
    if((status =Muxer.Init(&MuxerParams))!=UMC::UMC_OK)   
         return;   
     
    Params.info.clip_info.height=imgHeight; Params.info.clip_info.width=imgWidth;   
    Params.info.bitrate = 1000000;   
    Params.numThreads = 1;    
    if((status = H264Encoder.Init(&Params))!=UMC::UMC_OK)   
         return;   
    
    FrameSize = imgWidth*imgHeight*3/2;
    cMaxVideoData= ippsMalloc_8u(MAXAFRAMESIZE);   
    DataIn.Init(imgWidth,imgHeight,UMC::YV12,8);   
    DataIn.SetBufferPointer(cYUVData,FrameSize);   
    DataIn.SetDataSize(FrameSize);   
    DataOut.SetBufferPointer(cMaxVideoData,MAXAFRAMESIZE);   
    
    int nEncodedFrames=0;   
    while ( nEncodedFrames < frameNumber)   
    {      
       status = H264Encoder.GetFrame(&DataIn, &DataOut);
       if (status == UMC::UMC_OK) { 
        nEncodedFrames++;
        MuxData.SetBufferPointer((Ipp8u*)DataOut.GetBufferPointer(),DataOut.GetDataSize());
        memcpy(MuxData.GetDataPointer(),DataOut.GetDataPointer(), DataOut.GetDataSize());
        MuxData.SetDataSize(DataOut.GetDataSize());
        MuxData.SetTime(nEncodedFrames*((double)1.0)/30); 
         do {  
                status = Muxer.PutVideoData(&MuxData);   
                if (UMC::UMC_ERR_NOT_ENOUGH_BUFFER == status)   
                     vm_time_sleep(5);   
         }while (UMC::UMC_ERR_NOT_ENOUGH_BUFFER == status); 
         cYUVData+=FrameSize;
         DataIn.SetBufferPointer(cYUVData,FrameSize);
         DataIn.SetDataSize(FrameSize);
         DataOut.SetBufferPointer(cMaxVideoData,MAXAFRAMESIZE);   
       }   
    }   
    Muxer.Close();   
    ippsFree(cMaxVideoData);
    return;       
} 

YUV (I420) and separate Y, U, V planes

Please note that here the YUV data (I420) is stored in a single contiguous array. If your YUV data is stored in three separate plane arrays, you can amend the above code with the UMC function SetPlanePointer(*p, iPlaneNumber) for the Y, U, and V planes.
For example, in the encoder sample,

Change

DataIn.SetBufferPointer(cYUVData,imgWidth*imgHeight*3/2);

to 
// Compared to I420 (YUV), the planes in YV12 are ordered YVU
//LPBYTE pSrcY = cYUVData;
//LPBYTE pSrcU = pSrcY + (imgWidth * imgHeight);
//LPBYTE pSrcV = pSrcU + ((imgWidth * imgHeight) / 4);

DataIn.SetPlanePointer(pSrcY, 0); // Y
DataIn.SetPlanePointer(pSrcU, 1); // U
DataIn.SetPlanePointer(pSrcV, 2); // V
For complete information about compiler optimizations, see the Optimization Notice.