Advanced search

Media (3)

Word: - Tags -/Valkaama

Other articles (15)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Permissions overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;

On other sites (2341)

  • Video created using H263 codec and ffmpeg does not play on android device [closed]

    21 March 2013, by susheel tickoo

    I have created a video using FFmpeg and the H263 codec, but when I play it on an Android device the player is unable to play it. I have tried both the .mp4 and .3gp extensions.

     void  generate(JNIEnv *pEnv, jobject pObj,jobjectArray stringArray,int famerate,int width,int height,jstring videoFilename)
       {
           AVCodec *codec;
           AVCodecContext *c= NULL;
           //int framesnum=5;
           int i,looper, out_size, size, x, y,encodecbuffsize,j;
           __android_log_write(ANDROID_LOG_INFO, "record","************into generate************");
           int imagecount= (*pEnv)->GetArrayLength(pEnv, stringArray);
           __android_log_write(ANDROID_LOG_INFO, "record","************got imagecount************");
           int retval=-10;

           FILE *f;
           AVFrame *picture,*encoded_avframe;
           uint8_t  *encodedbuffer;
           jbyte *raw_record;
           char logdatadata[100];




           int returnvalue = -1,numBytes =-1;
           const char *gVideoFileName = (char *)(*pEnv)->GetStringUTFChars(pEnv, videoFilename, NULL);
           __android_log_write(ANDROID_LOG_INFO, "record","************got video file name************");

           /* find the video encoder */
           codec = avcodec_find_encoder(CODEC_ID_H264);
           if (!codec) {
               __android_log_write(ANDROID_LOG_INFO, "record","codec not found");
               exit(1);
           }
           c= avcodec_alloc_context();
           /*c->bit_rate = 400000;

           c->width = width;
           c->height = height;

           c->time_base= (AVRational){1,famerate};
           c->gop_size = 12; // emit one intra frame every ten frames
           c->max_b_frames=0;
           c->pix_fmt = PIX_FMT_YUV420P;
           c->codec_type = AVMEDIA_TYPE_VIDEO;
           c->codec_id = CODEC_ID_H263;*/

            c->bit_rate = 400000;
               // resolution must be a multiple of two
               c->width = 176;
               c->height = 144;
                   c->pix_fmt = PIX_FMT_YUV420P;


               c->qcompress = 0.0;
               c->qblur = 0.0;
               c->gop_size = 20;  //or 1
               c->sub_id = 1;
               c->workaround_bugs = FF_BUG_AUTODETECT;

               //pFFmpeg->c->time_base = (AVRational){1,25};
               c->time_base.num = 1;
               c->time_base.den = famerate;
               c->max_b_frames = 0; //pas de B frame en H263

              // c->opaque = opaque;
               c->dct_algo = FF_DCT_AUTO;
               c->idct_algo = FF_IDCT_AUTO;
               //lc->rtp_mode = 0;
               c->rtp_payload_size = 1000;
               c->rtp_callback = 0; // ffmpeg_rtp_callback;


               c->flags |= CODEC_FLAG_QSCALE;
               c->mb_decision = FF_MB_DECISION_RD;
               c->thread_count = 1;
           #define DEFAULT_RATE    (16 * 8 * 1024)
               c->rc_min_rate = DEFAULT_RATE;
               c->rc_max_rate = DEFAULT_RATE;
               c->rc_buffer_size = DEFAULT_RATE * 64;
               c->bit_rate = DEFAULT_RATE;                    


           sprintf(logdatadata, "------width from c ---- = %d",width);
           __android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
           sprintf(logdatadata, "------height from c ---- = %d",height);
           __android_log_write(ANDROID_LOG_INFO, "record",logdatadata);

           __android_log_write(ANDROID_LOG_INFO, "record","************Found codec and now opening it************");
           /* open it */
           retval = avcodec_open(c, codec);
           if ( retval < 0)
           {
               sprintf(logdatadata, "------avcodec_open ---- retval = %d",retval);
               __android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
               __android_log_write(ANDROID_LOG_INFO, "record","could not open codec");
               exit(1);
           }
           __android_log_write(ANDROID_LOG_INFO, "record","statement 5");
           f = fopen(gVideoFileName, "ab");

           if (!f) {
               __android_log_write(ANDROID_LOG_INFO, "record","could not open video file");
               exit(1);
           }

           __android_log_write(ANDROID_LOG_INFO, "record", "***************Allocating encodedbuffer*********\n");
           encodecbuffsize = avpicture_get_size(PIX_FMT_RGB24, c->width, c->height);

           sprintf(logdatadata, "encodecbuffsize = %d",encodecbuffsize);
           __android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
           encodedbuffer = malloc(encodecbuffsize);

           jclass cls = (*pEnv)->FindClass(pEnv, "com/canvasm/mediclinic/VideoGenerator");
           jmethodID mid = (*pEnv)->GetMethodID(pEnv, cls, "videoProgress", "(Ljava/lang/String;)Ljava/lang/String;");
           jmethodID mid_delete = (*pEnv)->GetMethodID(pEnv, cls, "deleteTempFile", "(Ljava/lang/String;)Ljava/lang/String;");

           if (mid == 0)
               return;

           __android_log_write(ANDROID_LOG_INFO, "native","got method id");


           for(i=0;i<=imagecount;i++) {

               jboolean isCp;
               int progress = 0;
               float temp;
               jstring string;
               if(i==imagecount)
                   string = (jstring) (*pEnv)->GetObjectArrayElement(pEnv, stringArray, imagecount-1);
               else
                   string = (jstring) (*pEnv)->GetObjectArrayElement(pEnv, stringArray, i);

               const char *rawString = (*pEnv)->GetStringUTFChars(pEnv, string, &isCp);

               __android_log_write(ANDROID_LOG_INFO, "record",rawString);
               picture = OpenImage(rawString,width,height);
               //WriteJPEG(c,picture,i);
               //   encode video
               memset(encodedbuffer,0,encodecbuffsize);

               //do{

               for(looper=0;looper<5;looper++)
               {
                   memset(encodedbuffer,0,encodecbuffsize);
                   out_size = avcodec_encode_video(c, encodedbuffer, encodecbuffsize, picture);
                   sprintf(logdatadata, "avcodec_encode_video ----- out_size = %d \n",out_size );
                   __android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
                   if(out_size>0)
                       break;
               }
               __android_log_write(ANDROID_LOG_INFO, "record","*************Start looping for same image*******");
               returnvalue = fwrite(encodedbuffer, 1, out_size, f);
               sprintf(logdatadata, "fwrite ----- returnvalue = %d \n",returnvalue );
               __android_log_write(ANDROID_LOG_INFO, "record",logdatadata);

               __android_log_write(ANDROID_LOG_INFO, "record","*************End looping for same image*******");

               // publishing progress
               progress = ((i*100)/(imagecount+1))+15;//+1 is for last frame duplicated entry
               if(progress<20 )
                   progress =20;
               if(progress>=95 )
                   progress =95;

               sprintf(logdatadata, "%d",progress );
               jstring jstrBuf = (*pEnv)->NewStringUTF(pEnv, logdatadata);
               (*pEnv)->CallObjectMethod(pEnv, pObj, mid,jstrBuf);

               if(i>0)
                   (*pEnv)->CallObjectMethod(pEnv, pObj, mid_delete,string);

           }


           /* get the delayed frames */
           for(; out_size; i++) {
               fflush(stdout);
               out_size = avcodec_encode_video(c, encodedbuffer, encodecbuffsize, NULL);
               fwrite(encodedbuffer, 1, out_size, f);
           }

           /* add sequence end code to have a real mpeg file */
           encodedbuffer[0] = 0x00;
           encodedbuffer[1] = 0x00;
           encodedbuffer[2] = 0x01;
           encodedbuffer[3] = 0xb7;
           fwrite(encodedbuffer, 1, 4, f);
           fclose(f);
           free(encodedbuffer);
           avcodec_close(c);
           av_free(c);
           __android_log_write(ANDROID_LOG_INFO, "record","Video created ");

           // last updation of 100%
           sprintf(logdatadata, "%d",100 );
           jstring jstrBuf = (*pEnv)->NewStringUTF(pEnv, logdatadata);
           (*pEnv)->CallObjectMethod(pEnv, pObj, mid,jstrBuf);
       }



       AVFrame* OpenImage(const char* imageFileName,int w,int h)
       {
           AVFrame *pFrame;
           AVCodec *pCodec ;
           AVFormatContext *pFormatCtx;
           AVCodecContext *pCodecCtx;
           uint8_t *buffer;
           int frameFinished,framesNumber = 0,retval = -1,numBytes=0;
           AVPacket packet;
           char logdatadata[100];
           //__android_log_write(ANDROID_LOG_INFO, "OpenImage",imageFileName);
           if(av_open_input_file(&pFormatCtx, imageFileName, NULL, 0, NULL)!=0)
           //if(avformat_open_input(&pFormatCtx,imageFileName,NULL,NULL)!=0)
           {
               __android_log_write(ANDROID_LOG_INFO, "record",
                       "Can't open image file ");
               return NULL;
           }

           pCodecCtx = pFormatCtx->streams[0]->codec;
           pCodecCtx->width = w;
           pCodecCtx->height = h;
           pCodecCtx->pix_fmt = PIX_FMT_YUV420P;

           // Find the decoder for the video stream
           pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
           if (!pCodec)
           {
               __android_log_write(ANDROID_LOG_INFO, "record",
                       "Can't open image file ");
               return NULL;
           }

           pFrame = avcodec_alloc_frame();

           numBytes = avpicture_get_size(PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
           buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));
           sprintf(logdatadata, "numBytes  = %d",numBytes);
           __android_log_write(ANDROID_LOG_INFO, "record",logdatadata);

           retval = avpicture_fill((AVPicture *) pFrame, buffer, PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);

           // Open codec
           if(avcodec_open(pCodecCtx, pCodec)<0)
           {
               __android_log_write(ANDROID_LOG_INFO, "record","Could not open codec");
               return NULL;
           }

           if (!pFrame)
           {
               __android_log_write(ANDROID_LOG_INFO, "record","Can't allocate memory for AVFrame\n");
               return NULL;
           }
           int readval = -5;
           while ((readval = av_read_frame(pFormatCtx, &packet)) >= 0)
           {
               if(packet.stream_index != 0)
                   continue;

               int ret = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
               sprintf(logdatadata, "avcodec_decode_video2 ret = %d",ret);
               __android_log_write(ANDROID_LOG_INFO, "record",logdatadata);

               if (ret > 0)
               {
                   __android_log_write(ANDROID_LOG_INFO, "record","Frame is decoded\n");
                   pFrame->quality = 4;
                   av_free_packet(&packet);
                   av_close_input_file(pFormatCtx);
                   return pFrame;
               }
               else
               {
                   __android_log_write(ANDROID_LOG_INFO, "record","error while decoding frame \n");
               }
           }
           sprintf(logdatadata, "readval = %d",readval);
           __android_log_write(ANDROID_LOG_INFO, "record",logdatadata);
           return NULL;
       }

    The generate method takes a list of strings (paths to images) and converts them to a video; the OpenImage method is responsible for converting a single image to an AVFrame.

  • Android FFmpeg Video Player

    11 March 2013, by Dilip

    I want to play video using FFmpeg. I have used the code below; it opens the file, but it does not draw any frames and throws an unhandled page fault exception.

    Java code:

    public class MainActivity extends Activity {
       private static native void openFile();

       private static native void drawFrame(Bitmap bitmap);

       private static native void drawFrameAt(Bitmap bitmap, int secs);

       private Bitmap mBitmap;
       private int mSecs = 0;

       static {
           System.loadLibrary("ffmpegutils");
       }

       /** Called when the activity is first created. */
       @Override
       public void onCreate(Bundle savedInstanceState) {
           super.onCreate(savedInstanceState);
           // setContentView(new VideoView(this));
           setContentView(R.layout.main);

           mBitmap = Bitmap.createBitmap(320, 240, Bitmap.Config.ARGB_8888);
           openFile();

           Button btn = (Button) findViewById(R.id.frame_adv);
           btn.setOnClickListener(new OnClickListener() {
               public void onClick(View v) {
                   try {
                       drawFrame(mBitmap);
                       ImageView i = (ImageView) findViewById(R.id.frame);
                       i.setImageBitmap(mBitmap);
                   } catch (Exception e) {
                       e.printStackTrace();
                   }
               }
           });
    }}

    JNI code:

     #include <jni.h>
     #include <stdio.h>
     #include <stdlib.h>
     #include <android/log.h>
     #include <android/bitmap.h>

     #include <libavcodec/avcodec.h>
     #include <libavformat/avformat.h>
     #include <libswscale/swscale.h>

    #define  LOG_TAG    "FFMPEGSample"
    #define  LOGI(...)  __android_log_print(ANDROID_LOG_INFO,LOG_TAG,__VA_ARGS__)
    #define  LOGE(...)  __android_log_print(ANDROID_LOG_ERROR,LOG_TAG,__VA_ARGS__)

    /* Cheat to keep things simple and just use some globals. */
    AVFormatContext *pFormatCtx;
    AVCodecContext *pCodecCtx;
    AVFrame *pFrame;
    AVFrame *pFrameRGB;
    int videoStream;

     /*
     * Write a frame worth of video (in pFrame) into the Android bitmap
     * described by info using the raw pixel buffer.  It's a very inefficient
     * draw routine, but it's easy to read. Relies on the format of the
     * bitmap being 8bits per color component plus an 8bit alpha channel.
     */

    static void fill_bitmap(AndroidBitmapInfo* info, void *pixels, AVFrame *pFrame) {
       uint8_t *frameLine;

       int yy;
        for (yy = 0; yy < info->height; yy++) {
           uint8_t* line = (uint8_t*) pixels;
           frameLine = (uint8_t *) pFrame->data[0] + (yy * pFrame->linesize[0]);

           int xx;
            for (xx = 0; xx < info->width; xx++) {
               int out_offset = xx * 4;
               int in_offset = xx * 3;
               line[out_offset] = frameLine[in_offset];
               line[out_offset + 1] = frameLine[in_offset + 1];
               line[out_offset + 2] = frameLine[in_offset + 2];
               line[out_offset + 3] = 0;
           }
           pixels = (char*) pixels + info->stride;
       }
    }

    void Java_com_churnlabs_ffmpegsample_MainActivity_openFile(JNIEnv * env,
           jobject this) {
       int ret;
       int err;
       int i;
       AVCodec *pCodec;
       uint8_t *buffer;
       int numBytes;

       av_register_all();
       LOGE("Registered formats***********************************");
        err = av_open_input_file(&pFormatCtx, "file:///mnt/sdcard/android.3gp",
               NULL, 0, NULL);
       LOGE("Called open file***************************************************");
       if (err != 0) {
           LOGE(
                    "Couldn't open file****************************************************");
           return;
       }
       LOGE(
               "Opened file***********************************************************");

        if (av_find_stream_info(pFormatCtx) < 0) {
           LOGE(
                   "Unable to get stream info*****************************************");
           return;
       }

       videoStream = -1;
        for (i = 0; i < pFormatCtx->nb_streams; i++) {
           if (pFormatCtx->streams[i]->codec->codec_type == CODEC_TYPE_VIDEO) {
               videoStream = i;
               break;
           }
       }
       if (videoStream == -1) {
           LOGE("Unable to find video stream");
           return;
       }

       LOGI("Video stream is [%d]", videoStream);

       pCodecCtx = pFormatCtx->streams[videoStream]->codec;

       pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
       if (pCodec == NULL) {
           LOGE("Unsupported codec**********************************************");
           return;
       }

        if (avcodec_open(pCodecCtx, pCodec) < 0) {
           LOGE("Unable to open codec***************************************");
           return;
       }

       pFrame = avcodec_alloc_frame();
       pFrameRGB = avcodec_alloc_frame();
       LOGI("Video size is [%d x %d]", pCodecCtx->width, pCodecCtx->height);

       numBytes = avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
               pCodecCtx->height);
       buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));

       avpicture_fill((AVPicture *) pFrameRGB, buffer, PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);
    }

     void Java_com_churnlabs_ffmpegsample_MainActivity_drawFrame(JNIEnv * env,
            jobject this, jobject bitmap) {
       AndroidBitmapInfo info;
       void* pixels;
       int ret;

       int err;
       int i;
       int frameFinished = 0;
       AVPacket packet;
       static struct SwsContext *img_convert_ctx;
       int64_t seek_target;

        if ((ret = AndroidBitmap_getInfo(env, bitmap, &info)) < 0) {
           LOGE("AndroidBitmap_getInfo() failed ! error=%d", ret);
           return;
       }
       LOGE(
               "Checked on the bitmap*************************************************");

        if ((ret = AndroidBitmap_lockPixels(env, bitmap, &pixels)) < 0) {
           LOGE("AndroidBitmap_lockPixels() failed ! error=%d", ret);
       }
       LOGE(
               "Grabbed the pixels*******************************************************");

       i = 0;
        while ((i == 0) && (av_read_frame(pFormatCtx, &packet) >= 0)) {
           if (packet.stream_index == videoStream) {
                avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

               if (frameFinished) {
                   LOGE("packet pts %llu", packet.pts);
                   // This is much different than the tutorial, sws_scale
                    // replaces img_convert, but it's not a complete drop in.
                   // This version keeps the image the same size but swaps to
                   // RGB24 format, which works perfect for PPM output.
                   int target_width = 320;
                   int target_height = 240;
                   img_convert_ctx = sws_getContext(pCodecCtx->width,
                           pCodecCtx->height, pCodecCtx->pix_fmt, target_width,
                           target_height, PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL,
                           NULL);
                   if (img_convert_ctx == NULL) {
                       LOGE("could not initialize conversion context\n");
                       return;
                   }
                   sws_scale(img_convert_ctx,
                           (const uint8_t* const *) pFrame->data, pFrame->linesize,
                           0, pCodecCtx->height, pFrameRGB->data,
                           pFrameRGB->linesize);

                   // save_frame(pFrameRGB, target_width, target_height, i);
                    fill_bitmap(&info, pixels, pFrameRGB);
                   i = 1;
               }
           }
            av_free_packet(&packet);
       }

       AndroidBitmap_unlockPixels(env, bitmap);
    }

    int seek_frame(int tsms) {
       int64_t frame;

       frame = av_rescale(tsms, pFormatCtx->streams[videoStream]->time_base.den,
               pFormatCtx->streams[videoStream]->time_base.num);
       frame /= 1000;

       if (avformat_seek_file(pFormatCtx, videoStream, 0, frame, frame,
                AVSEEK_FLAG_FRAME) < 0) {
           return 0;
       }

       avcodec_flush_buffers(pCodecCtx);

       return 1;
    }

    Log Trace

     0): <6>AUO_TOUCH: ts_irqHandler: before disable_irq()
     D/PrintK  (   57): <6>AUO_TOUCH: ts_irqWorkHandler: P1(313,750),P2(0,0)
     D/PrintK  (    0): <6>AUO_TOUCH: ts_irqHandler: before disable_irq()
     D/PrintK  (   57): <6>AUO_TOUCH: ts_irqWorkHandler: P1(0,0),P2(0,0)
     E/FFMPEGSample( 2882): Checked on the bitmap*************************************************
     E/FFMPEGSample( 2882): Grabbed the pixels*******************************************************
     E/FFMPEGSample( 2882): packet pts 0
     F/PrintK  ( 2882): <2>Exception!!! bs.ffmpegsample: unhandled page fault (11) at 0x0000000c, code 0x017
     F/PrintK  ( 2882): <2>Exception!!! bs.ffmpegsample: unhandled page fault (11) at 0x0000000c, code 0x017
     I/DEBUG   (   86): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
     F/DEBUG   (   86): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
     I/DEBUG   (   86): Build fingerprint: 'dell/streak/streak/8x50:2.2.2/FRG83G/eng.cmbuild.20110317.163900:user/release-keys'
     I/DEBUG   (   86): Exception!!! pid: 2882, tid: 2882  >>> com.churnlabs.ffmpegsample <<<
     F/DEBUG   (   86): Exception!!! pid: 2882, tid: 2882  >>> com.churnlabs.ffmpegsample <<<
    I/DEBUG   (   86): signal 11 (SIGSEGV), fault addr 0000000c
    F/DEBUG   (   86): signal 11 (SIGSEGV), fault addr 0000000c
    I/DEBUG   (   86):  r0 00000070  r1 00000000  r2 0024fca8  r3 afd42328
    F/DEBUG   (   86):  r0 00000070  r1 00000000  r2 0024fca8  r3 afd42328
    I/DEBUG   (   86):  r4 00000000  r5 00000000  r6 0000062c  r7 0000a000
    F/DEBUG   (   86):  r4 00000000  r5 00000000  r6 0000062c  r7 0000a000
    I/DEBUG   (   86):  r8 be9794f0  r9 428ab9d8  10 00000003  fp be979830
    F/DEBUG   (   86):  r8 be9794f0  r9 428ab9d8  10 00000003  fp be979830
    I/DEBUG   (   86):  ip ffffff90  sp be979448  lr afd0c633  pc afd0c320  cpsr 80000030
    F/DEBUG   (   86):  ip ffffff90  sp be979448  lr afd0c633  pc afd0c320  cpsr 80000030
    I/DEBUG   (   86):  d0  6472656767756265  d1  0000000000000000
    I/DEBUG   (   86):  d2  0000000000000000  d3  0000000044480000
    I/DEBUG   (   86):  d4  8000000000000000  d5  000000003f800000
    I/DEBUG   (   86):  d6  0000000000000000  d7  4448000043f00000
    I/DEBUG   (   86):  d8  0000000000000000  d9  0000000000000000
    I/DEBUG   (   86):  d10 0000000000000000  d11 0000000000000000
    I/DEBUG   (   86):  d12 0000000000000000  d13 0000000000000000
    I/DEBUG   (   86):  d14 0000000000000000  d15 0000000000000000
    I/DEBUG   (   86):  d16 0000000000000000  d17 0000000000000000
    I/DEBUG   (   86):  d18 0000000000000000  d19 0000000000000000
    I/DEBUG   (   86):  d20 3ff0000000000000  d21 8000000000000000
    I/DEBUG   (   86):  d22 0000000000000000  d23 0000000500010004
    I/DEBUG   (   86):  d24 0101010101010101  d25 0000000000000000
    I/DEBUG   (   86):  d26 0000000000000000  d27 0000000000000000
    I/DEBUG   (   86):  d28 0000000000000000  d29 3ff0000000000000
    I/DEBUG   (   86):  d30 0000000000000000  d31 3ff0000000000000
    I/DEBUG   (   86):  scr 80000012
    I/DEBUG   (   86):
    I/DEBUG   (   86):          #00  pc 0000c320  /system/lib/libc.so
    F/DEBUG   (   86):          #00  pc 0000c320  /system/lib/libc.so
    I/DEBUG   (   86):          #01  pc 0000c62e  /system/lib/libc.so
    F/DEBUG   (   86):          #01  pc 0000c62e  /system/lib/libc.so
    I/DEBUG   (   86):          #02  pc 0000cd3e  /system/lib/libc.so
    F/DEBUG   (   86):          #02  pc 0000cd3e  /system/lib/libc.so
    I/DEBUG   (   86):          #03  pc 0002d2c4  /system/lib/libskia.so
    F/DEBUG   (   86):          #03  pc 0002d2c4  /system/lib/libskia.so
    I/DEBUG   (   86):          #04  pc 000693ec  /system/lib/libskia.so
    F/DEBUG   (   86):          #04  pc 000693ec  /system/lib/libskia.so
    I/DEBUG   (   86):          #05  pc 00064d70  /system/lib/libskia.so
    F/DEBUG   (   86):          #05  pc 00064d70  /system/lib/libskia.so
    I/DEBUG   (   86):          #06  pc 0004dea8  /system/lib/libandroid_runtime.so
    F/DEBUG   (   86):          #06  pc 0004dea8  /system/lib/libandroid_runtime.so
    I/DEBUG   (   86):          #07  pc 00016df4  /system/lib/libdvm.so
    F/DEBUG   (   86):          #07  pc 00016df4  /system/lib/libdvm.so
    I/DEBUG   (   86):          #08  pc 00042904  /system/lib/libdvm.so
    F/DEBUG   (   86):          #08  pc 00042904  /system/lib/libdvm.so
    I/DEBUG   (   86):          #09  pc 0001bd58  /system/lib/libdvm.so
    F/DEBUG   (   86):          #09  pc 0001bd58  /system/lib/libdvm.so
    I/DEBUG   (   86):          #10  pc 00022550  /system/lib/libdvm.so
    F/DEBUG   (   86):          #10  pc 00022550  /system/lib/libdvm.so
    I/DEBUG   (   86):          #11  pc 000213f0  /system/lib/libdvm.so
    F/DEBUG   (   86):          #11  pc 000213f0  /system/lib/libdvm.so
    I/DEBUG   (   86):          #12  pc 00058c4a  /system/lib/libdvm.so
    F/DEBUG   (   86):          #12  pc 00058c4a  /system/lib/libdvm.so
    I/DEBUG   (   86):          #13  pc 00060e72  /system/lib/libdvm.so
    F/DEBUG   (   86):          #13  pc 00060e72  /system/lib/libdvm.so
    I/DEBUG   (   86):          #14  pc 0001bd58  /system/lib/libdvm.so
    F/DEBUG   (   86):          #14  pc 0001bd58  /system/lib/libdvm.so
    I/DEBUG   (   86):          #15  pc 00022550  /system/lib/libdvm.so
    F/DEBUG   (   86):          #15  pc 00022550  /system/lib/libdvm.so
    I/DEBUG   (   86):          #16  pc 000213f0  /system/lib/libdvm.so
    F/DEBUG   (   86):          #16  pc 000213f0  /system/lib/libdvm.so
    I/DEBUG   (   86):          #17  pc 00058a90  /system/lib/libdvm.so
    F/DEBUG   (   86):          #17  pc 00058a90  /system/lib/libdvm.so
    I/DEBUG   (   86):          #18  pc 0004525e  /system/lib/libdvm.so
    F/DEBUG   (   86):          #18  pc 0004525e  /system/lib/libdvm.so
    I/DEBUG   (   86):          #19  pc 0002e574  /system/lib/libandroid_runtime.so
    F/DEBUG   (   86):          #19  pc 0002e574  /system/lib/libandroid_runtime.so
    I/DEBUG   (   86):          #20  pc 0002f5f6  /system/lib/libandroid_runtime.so
    F/DEBUG   (   86):          #20  pc 0002f5f6  /system/lib/libandroid_runtime.so
    I/DEBUG   (   86):          #21  pc 00008ca8  /system/bin/app_process
    F/DEBUG   (   86):          #21  pc 00008ca8  /system/bin/app_process
    I/DEBUG   (   86):          #22  pc 0000d3d0  /system/lib/libc.so
    F/DEBUG   (   86):          #22  pc 0000d3d0  /system/lib/libc.so
    I/DEBUG   (   86):
    I/DEBUG   (   86): code around pc:
    I/DEBUG   (   86): afd0c300 19d94f56 42ba690f 80a4f0c0 94001814
    I/DEBUG   (   86): afd0c310 f08042a2 68d1809f 42916994 6895d00e
    I/DEBUG   (   86): afd0c320 429668ee 8096f040 4296688e 8092f040
    I/DEBUG   (   86): afd0c330 bf2442bd 608d60e9 e08bd21b b1116951
    I/DEBUG   (   86): afd0c340 0514f102 6911e007 f102b191 e0020510
    I/DEBUG   (   86):
    I/DEBUG   (   86): code around lr:
    I/DEBUG   (   86): afd0c610 60f11008 f8c1608e 4e31c00c f10319a1
    I/DEBUG   (   86): afd0c620 608a0608 e04b614d b1b2684a f7ff4628
    I/DEBUG   (   86): afd0c630 e00ffe23 0f41f115 f04fbf88 d80c35ff
    I/DEBUG   (   86): afd0c640 350b4927 0507f025 68431860 4628b12b
    I/DEBUG   (   86): afd0c650 fc1cf7ff 28004606 4e21d132 689119a2
    I/DEBUG   (   86):
    I/DEBUG   (   86): stack:
    I/DEBUG   (   86):     be979408  000001e0  
    I/DEBUG   (   86):     be97940c  be979494  [stack]
    I/DEBUG   (   86):     be979410  be979438  [stack]
    I/DEBUG   (   86):     be979414  be979478  [stack]
    I/DEBUG   (   86):     be979418  0012f484  [heap]
    I/DEBUG   (   86):     be97941c  be979428  [stack]
    I/DEBUG   (   86):     be979420  00000000  
    I/DEBUG   (   86):     be979424  ab163cec  /system/lib/libskia.so
    I/DEBUG   (   86):     be979428  3f800000  
    I/DEBUG   (   86):     be97942c  80000000  /system/lib/libicudata.so
    I/DEBUG   (   86):     be979430  00000000  
    I/DEBUG   (   86):     be979434  80000000  /system/lib/libicudata.so
    I/DEBUG   (   86):     be979438  3f800000  
    I/DEBUG   (   86):     be97943c  00000000  
    I/DEBUG   (   86):     be979440  df002777  
    I/DEBUG   (   86):     be979444  e3a070ad  
    I/DEBUG   (   86): #00 be979448  0024fd18  [heap]
    I/DEBUG   (   86):     be97944c  afd4372c  /system/lib/libc.so
    I/DEBUG   (   86):     be979450  000000c5  
    I/DEBUG   (   86):     be979454  afd42328  /system/lib/libc.so
    I/DEBUG   (   86):     be979458  00000070  
    I/DEBUG   (   86):     be97945c  0000062c  
    I/DEBUG   (   86):     be979460  00000003  
    I/DEBUG   (   86):     be979464  afd0c633  /system/lib/libc.so
    I/DEBUG   (   86): #01 be979468  be9794c8  [stack]
    I/DEBUG   (   86):     be97946c  00000000  
    I/DEBUG   (   86):     be979470  002576bc  [heap]
    I/DEBUG   (   86):     be979474  ab163d2c  /system/lib/libskia.so
    I/DEBUG   (   86):     be979478  00000000  
    I/DEBUG   (   86):     be97947c  00000000  
    I/DEBUG   (   86):     be979480  44480000  /system/framework/framework-res.apk
    I/DEBUG   (   86):     be979484  00000068  
    I/DEBUG   (   86):     be979488  00000002  
    I/DEBUG   (   86):     be97948c  00000068  
    I/DEBUG   (   86):     be979490  00000003  
    I/DEBUG   (   86):     be979494  afd0cd41  /system/lib/libc.so
    E/Parcel  (  841): Reading a NULL string not supported here.

    Can anyone please suggest where I am going wrong?

  • H.264 muxed to MP4 using libavformat not playing back

    14 May 2015, by Brad Mitchell

    I am trying to mux H.264 data into an MP4 file. There appear to be no errors when saving this H.264 Annex B data out to an MP4 file, but the file fails to play back.

    I’ve done a binary comparison on the files and the issue seems to be somewhere in what is being written to the footer (trailer) of the MP4 file.

    I suspect it has something to do with the way the stream is being created.

    Init:

    AVOutputFormat* fmt = av_guess_format( 0, "out.mp4", 0 );
    oc = avformat_alloc_context();
    oc->oformat = fmt;
    strcpy(oc->filename, filename);

    Part of this prototype app creates a PNG file for each I-frame. So when the first I-frame is encountered, I create the video stream and write the AV header etc.:

    void addVideoStream(AVCodecContext* decoder)
    {
        videoStream = av_new_stream(oc, 0);
        if (!videoStream)
        {
            cout << "ERROR creating video stream" << endl;
            return;
        }
        vi = videoStream->index;
        videoContext = videoStream->codec;
        videoContext->codec_type = AVMEDIA_TYPE_VIDEO;
        videoContext->codec_id = decoder->codec_id;
        videoContext->bit_rate = 512000;
        videoContext->width = decoder->width;
        videoContext->height = decoder->height;
        videoContext->time_base.den = 25;
        videoContext->time_base.num = 1;
        videoContext->gop_size = decoder->gop_size;
        videoContext->pix_fmt = decoder->pix_fmt;

        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
            videoContext->flags |= CODEC_FLAG_GLOBAL_HEADER;

        av_dump_format(oc, 0, filename, 1);

        if (!(oc->oformat->flags & AVFMT_NOFILE))
        {
            if (avio_open(&oc->pb, filename, AVIO_FLAG_WRITE) < 0) {
                cout << "Error opening file" << endl;
            }
        }
        avformat_write_header(oc, NULL);
    }

    I write packets out:

    unsigned char* data = block->getData();
    unsigned char videoFrameType = data[4];
    int dataLen = block->getDataLen();

    // store pps
    if (videoFrameType == 0x68)
    {
        if (ppsFrame != NULL)
        {
            delete[] ppsFrame; ppsFrameLength = 0; ppsFrame = NULL;
        }
        ppsFrameLength = block->getDataLen();
        ppsFrame = new unsigned char[ppsFrameLength];
        memcpy(ppsFrame, block->getData(), ppsFrameLength);
    }
    else if (videoFrameType == 0x67)
    {
        // sps
        if (spsFrame != NULL)
        {
            delete[] spsFrame; spsFrameLength = 0; spsFrame = NULL;
        }
        spsFrameLength = block->getDataLen();
        spsFrame = new unsigned char[spsFrameLength];
        memcpy(spsFrame, block->getData(), spsFrameLength);
    }

    if (videoFrameType == 0x65 || videoFrameType == 0x41)
    {
        videoFrameNumber++;
    }
    if (videoFrameType == 0x65)
    {
        decodeIFrame(videoFrameNumber, spsFrame, spsFrameLength, ppsFrame, ppsFrameLength, data, dataLen);
    }

    if (videoStream != NULL)
    {
        AVPacket pkt = { 0 };
        av_init_packet(&pkt);
        pkt.stream_index = vi;
        pkt.flags = 0;
        pkt.pts = pkt.dts = 0;

        if (videoFrameType == 0x65)
        {
            // combine the SPS, PPS & I frames together
            pkt.flags |= AV_PKT_FLAG_KEY;
            unsigned char* videoFrame = new unsigned char[spsFrameLength+ppsFrameLength+dataLen];
            memcpy(videoFrame, spsFrame, spsFrameLength);
            memcpy(&videoFrame[spsFrameLength], ppsFrame, ppsFrameLength);
            memcpy(&videoFrame[spsFrameLength+ppsFrameLength], data, dataLen);

            // overwrite the start code (00 00 00 01) with a 32-bit length
            setLength(videoFrame, spsFrameLength-4);
            setLength(&videoFrame[spsFrameLength], ppsFrameLength-4);
            setLength(&videoFrame[spsFrameLength+ppsFrameLength], dataLen-4);
            pkt.size = dataLen + spsFrameLength + ppsFrameLength;
            pkt.data = videoFrame;
            av_interleaved_write_frame(oc, &pkt);
            delete[] videoFrame; videoFrame = NULL;
        }
        else if (videoFrameType != 0x67 && videoFrameType != 0x68)
        {
            // Send other frames except pps & sps which are caught and stored
            pkt.size = dataLen;
            pkt.data = data;
            setLength(data, dataLen-4);
            av_interleaved_write_frame(oc, &pkt);
        }
    }
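The `setLength()` helper used above is never shown in the post; a likely implementation (my assumption) simply overwrites the 4-byte Annex B start code with the NAL unit length as a 32-bit big-endian value, which is what the MP4/AVCC sample format expects. The frame-type bytes tested above (0x67, 0x68, 0x65, 0x41) are whole NAL header bytes; the NAL unit type itself is only the low 5 bits:

```cpp
#include <cassert>

// Probable setLength(): replace the 00 00 00 01 start code in the
// first 4 bytes with the NAL unit length, big-endian.
void setLength(unsigned char* buf, int length)
{
    buf[0] = (length >> 24) & 0xff;
    buf[1] = (length >> 16) & 0xff;
    buf[2] = (length >> 8)  & 0xff;
    buf[3] =  length        & 0xff;
}

// NAL unit type = low 5 bits of the header byte:
// 7 = SPS, 8 = PPS, 5 = IDR slice, 1 = non-IDR slice.
int nalType(unsigned char header) { return header & 0x1F; }
```

This also explains why 0x67/0x68/0x65/0x41 work as magic values here: they are the full header bytes (forbidden bit, nal_ref_idc, type) that this particular camera happens to emit.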

    Finally, to close the file off:

    av_write_trailer(oc);
    int i = 0;
    for (i = 0; i < oc->nb_streams; i++)
    {
        av_freep(&oc->streams[i]->codec);
        av_freep(&oc->streams[i]);
    }

    if (!(oc->oformat->flags & AVFMT_NOFILE))
    {
        avio_close(oc->pb);
    }
    av_free(oc);

    If I take the H.264 data alone and convert it:

    ffmpeg -i recording.h264 -vcodec copy recording.mp4

    All but the "footer" of the files are the same.

    Output from my program:
    readrec recording.tcp out.mp4
    ** START * 01-03-2013 14:26:01 180000
    Output #0, mp4, to 'out.mp4':
        Stream #0:0: Video: h264, yuv420p, 352x288, q=2-31, 512 kb/s, 90k tbn, 25 tbc
    * END ** 01-03-2013 14:27:01 102000
    Wrote 1499 video frames.

    If I try to convert the MP4 file created by my code using ffmpeg:

    ffmpeg -i out.mp4 -vcodec copy out2.mp4
    ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
         built on Mar  7 2013 12:49:22 with suncc 0x5110
         configuration: --extra-cflags=-KPIC -g --disable-mmx
         --disable-protocol=udp --disable-encoder=nellymoser --cc=cc --cxx=CC
    libavutil      51. 54.100 / 51. 54.100
    libavcodec     54. 23.100 / 54. 23.100
    libavformat    54.  6.100 / 54.  6.100
    libavdevice    54.  0.100 / 54.  0.100
    libavfilter     2. 77.100 /  2. 77.100
    libswscale      2.  1.100 /  2.  1.100
    libswresample   0. 15.100 /  0. 15.100
    [h264 @ 12eaac0] no frame!
       Last message repeated 1 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 23 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 74 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 64 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 34 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 49 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 24 times
    [h264 @ 12eaac0] Partitioned H.264 support is incomplete
    [h264 @ 12eaac0] no frame!
       Last message repeated 23 times
    [h264 @ 12eaac0] sps_id out of range
    [h264 @ 12eaac0] no frame!
       Last message repeated 148 times
    [h264 @ 12eaac0] sps_id (32) out of range
       Last message repeated 1 times
    [h264 @ 12eaac0] no frame!
       Last message repeated 33 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 128 times
    [h264 @ 12eaac0] sps_id (32) out of range
       Last message repeated 1 times
    [h264 @ 12eaac0] no frame!
       Last message repeated 3 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 3 times
    [h264 @ 12eaac0] slice type too large (0) at 0 0
    [h264 @ 12eaac0] decode_slice_header error
    [h264 @ 12eaac0] no frame!
       Last message repeated 309 times
    [h264 @ 12eaac0] sps_id (32) out of range
       Last message repeated 1 times
    [h264 @ 12eaac0] no frame!
       Last message repeated 192 times
    [h264 @ 12eaac0] Partitioned H.264 support is incomplete
    [h264 @ 12eaac0] no frame!
       Last message repeated 73 times
    [h264 @ 12eaac0] sps_id (32) out of range
       Last message repeated 1 times
    [h264 @ 12eaac0] no frame!
       Last message repeated 99 times
    [h264 @ 12eaac0] sps_id (32) out of range
       Last message repeated 1 times
    [h264 @ 12eaac0] no frame!
       Last message repeated 197 times
    [mov,mp4,m4a,3gp,3g2,mj2 @ 12e3100] decoding for stream 0 failed
    [mov,mp4,m4a,3gp,3g2,mj2 @ 12e3100] Could not find codec parameters
    (Video: h264 (avc1 / 0x31637661), 393539 kb/s)
    out.mp4: could not find codec parameters

    I really do not know where the issue is, except that it has to be something to do with the way the streams are being set up. I've looked at bits of code where other people are doing a similar thing, and tried to use this advice in setting up the streams, but to no avail!


    The final code which gave me an H.264/AAC muxed (synced) file is as follows. First, a bit of background information: the data is coming from an IP camera. The data is presented via a 3rd-party API as video/audio packets. The video packets are presented as the RTP payload data (no header) and consist of NALUs that are reconstructed and converted to H.264 video in Annex B format. AAC audio is presented as raw AAC and is converted to ADTS format to enable playback. These packets have been put into a bitstream format that allows the transmission of the timestamp (64-bit milliseconds since Jan 1 1970) along with a few other things.

    This is more or less a prototype and is not clean in any respect. It probably leaks badly. I do, however, hope this helps anyone else trying to achieve something similar to what I am.

    Globals:

    AVFormatContext* oc = NULL;
    AVCodecContext* videoContext = NULL;
    AVStream* videoStream = NULL;
    AVCodecContext* audioContext = NULL;
    AVStream* audioStream = NULL;
    AVCodec* videoCodec = NULL;
    AVCodec* audioCodec = NULL;
    int vi = 0;  // Video stream
    int ai = 1;  // Audio stream

    uint64_t firstVideoTimeStamp = 0;
    uint64_t firstAudioTimeStamp = 0;
    int audioStartOffset = 0;

    char* filename = NULL;

    Boolean first = TRUE;

    int videoFrameNumber = 0;
    int audioFrameNumber = 0;

    Main:

    int main(int argc, char* argv[])
    {
        if (argc != 3)
        {
            cout << argv[0] << " <stream playback file> <output mp4 file>" << endl;
            return 0;
        }
        char* input_stream_file = argv[1];
        filename = argv[2];

        av_register_all();

        fstream inFile;
        inFile.open(input_stream_file, ios::in);

        // Used to store the latest pps & sps frames
        unsigned char* ppsFrame = NULL;
        int ppsFrameLength = 0;
        unsigned char* spsFrame = NULL;
        int spsFrameLength = 0;

        // Setup MP4 output file
        AVOutputFormat* fmt = av_guess_format( 0, filename, 0 );
        oc = avformat_alloc_context();
        oc->oformat = fmt;
        strcpy(oc->filename, filename);

        // Setup the bitstream filter for AAC in ADTS format.  Could probably also achieve
        // this by stripping the first 7 bytes!
        AVBitStreamFilterContext* bsfc = av_bitstream_filter_init("aac_adtstoasc");
        if (!bsfc)
        {
            cout << "Error creating adtstoasc filter" << endl;
            return -1;
        }

        while (inFile.good())
        {
            TcpAVDataBlock* block = new TcpAVDataBlock();
            block->readStruct(inFile);
            DateTime dt = block->getTimestampAsDateTime();
            switch (block->getPacketType())
            {
                case TCP_PACKET_H264:
                {
                    if (firstVideoTimeStamp == 0)
                        firstVideoTimeStamp = block->getTimeStamp();
                    unsigned char* data = block->getData();
                    unsigned char videoFrameType = data[4];
                    int dataLen = block->getDataLen();

                    // pps
                    if (videoFrameType == 0x68)
                    {
                        if (ppsFrame != NULL)
                        {
                            delete[] ppsFrame; ppsFrameLength = 0;
                            ppsFrame = NULL;
                        }
                        ppsFrameLength = block->getDataLen();
                        ppsFrame = new unsigned char[ppsFrameLength];
                        memcpy(ppsFrame, block->getData(), ppsFrameLength);
                    }
                    else if (videoFrameType == 0x67)
                    {
                        // sps
                        if (spsFrame != NULL)
                        {
                            delete[] spsFrame; spsFrameLength = 0;
                            spsFrame = NULL;
                        }
                        spsFrameLength = block->getDataLen();
                        spsFrame = new unsigned char[spsFrameLength];
                        memcpy(spsFrame, block->getData(), spsFrameLength);
                    }

                    if (videoFrameType == 0x65 || videoFrameType == 0x41)
                    {
                        videoFrameNumber++;
                    }
                    // Extract a thumbnail for each I-Frame
                    if (videoFrameType == 0x65)
                    {
                        decodeIFrame(h264, spsFrame, spsFrameLength, ppsFrame, ppsFrameLength, data, dataLen);
                    }
                    if (videoStream != NULL)
                    {
                        AVPacket pkt = { 0 };
                        av_init_packet(&pkt);
                        pkt.stream_index = vi;
                        pkt.flags = 0;
                        pkt.pts = videoFrameNumber;
                        pkt.dts = videoFrameNumber;
                        if (videoFrameType == 0x65)
                        {
                            pkt.flags = 1;

                            unsigned char* videoFrame = new unsigned char[spsFrameLength+ppsFrameLength+dataLen];
                            memcpy(videoFrame, spsFrame, spsFrameLength);
                            memcpy(&videoFrame[spsFrameLength], ppsFrame, ppsFrameLength);

                            memcpy(&videoFrame[spsFrameLength+ppsFrameLength], data, dataLen);
                            pkt.size = spsFrameLength + ppsFrameLength + dataLen;
                            pkt.data = videoFrame;
                            av_interleaved_write_frame(oc, &pkt);
                            delete[] videoFrame; videoFrame = NULL;
                        }
                        else if (videoFrameType != 0x67 && videoFrameType != 0x68)
                        {
                            pkt.size = dataLen;
                            pkt.data = data;
                            av_interleaved_write_frame(oc, &pkt);
                        }
                    }
                    break;
                }

            case TCP_PACKET_AAC:

                if (firstAudioTimeStamp == 0)
                {
                    firstAudioTimeStamp = block->getTimeStamp();
                    uint64_t milliseconds_difference = firstAudioTimeStamp - firstVideoTimeStamp;
                    audioStartOffset = milliseconds_difference * 16000 / 1000;
                    cout << "audio offset: " << audioStartOffset << endl;
                }

                if (audioStream != NULL)
                {
                    AVPacket pkt = { 0 };
                    av_init_packet(&pkt);
                    pkt.stream_index = ai;
                    pkt.flags = 1;
                    pkt.pts = audioFrameNumber*1024;
                    pkt.dts = audioFrameNumber*1024;
                    pkt.data = block->getData();
                    pkt.size = block->getDataLen();
                    pkt.duration = 1024;

                    AVPacket newpacket = pkt;
                    int rc = av_bitstream_filter_filter(bsfc, audioContext,
                        NULL,
                        &newpacket.data, &newpacket.size,
                        pkt.data, pkt.size,
                        pkt.flags & AV_PKT_FLAG_KEY);

                    if (rc >= 0)
                    {
                        //cout << "Write audio frame" << endl;
                        newpacket.pts = audioFrameNumber*1024;
                        newpacket.dts = audioFrameNumber*1024;
                        audioFrameNumber++;
                        newpacket.duration = 1024;

                        av_interleaved_write_frame(oc, &newpacket);
                        av_free_packet(&newpacket);
                    }
                    else
                    {
                        cout << "Error filtering aac packet" << endl;
                    }
                }
                break;

           case TCP_PACKET_START:
               break;

           case TCP_PACKET_END:
               break;
           }
           delete block;
       }
       inFile.close();

        av_write_trailer(oc);
        int i = 0;
        for (i = 0; i < oc->nb_streams; i++)
        {
            av_freep(&oc->streams[i]->codec);
            av_freep(&oc->streams[i]);
        }

        if (!(oc->oformat->flags & AVFMT_NOFILE))
        {
            avio_close(oc->pb);
        }

        av_free(oc);

        delete[] spsFrame; spsFrame = NULL;
        delete[] ppsFrame; ppsFrame = NULL;

        cout << "Wrote " << videoFrameNumber << " video frames." << endl;

       return 0;
    }
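The aac_adtstoasc bitstream filter set up in main() above converts each ADTS AAC frame into the raw form MP4 expects; the "stripping the first 7 bytes" comment refers to the fixed ADTS header. A sketch of that header parsing (my assumption, not from the post; `parseAdts` is a hypothetical helper):

```cpp
#include <cassert>
#include <cstdint>

// An ADTS AAC frame carries a 7-byte fixed header (9 bytes when a CRC
// is present).  The aac_adtstoasc filter removes this header; the
// decoder configuration moves into the MP4 esds box instead.
struct AdtsInfo { int headerLen; int frameLen; bool valid; };

AdtsInfo parseAdts(const uint8_t* p, int len)
{
    AdtsInfo info = {0, 0, false};
    if (len < 7 || p[0] != 0xFF || (p[1] & 0xF0) != 0xF0)
        return info;                           // no ADTS syncword (12 bits of 1s)
    bool crcPresent = (p[1] & 0x01) == 0;      // protection_absent == 0 means CRC
    info.headerLen = crcPresent ? 9 : 7;
    // frame_length: 13 bits spread over bytes 3..5, includes the header itself
    info.frameLen = ((p[3] & 0x03) << 11) | (p[4] << 3) | ((p[5] & 0xE0) >> 5);
    info.valid = info.frameLen >= info.headerLen && info.frameLen <= len;
    return info;
}
```

Stripping `headerLen` bytes from the front of each frame (when no CRC is present) yields the same raw AAC payload the filter produces, though the filter also handles the extradata side for you.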

    The streams/codecs are added and the header is created in a function called addVideoAndAudioStream(). This function is called from decodeIFrame(), so there are a few assumptions (which aren't necessarily good):
    1. A video packet comes first
    2. AAC is present

    The decodeIFrame function was kind of a separate prototype, where I was creating a thumbnail for each I-frame. The code to generate thumbnails was from: https://gnunet.org/svn/Extractor/src/plugins/thumbnailffmpeg_extractor.c

    The decodeIFrame function passes an AVCodecContext into addVideoAndAudioStream:

    void addVideoAndAudioStream(AVCodecContext* decoder = NULL)
    {
        videoStream = av_new_stream(oc, 0);
        if (!videoStream)
        {
            cout << "ERROR creating video stream" << endl;
            return;
        }
        vi = videoStream->index;
        videoContext = videoStream->codec;
        videoContext->codec_type = AVMEDIA_TYPE_VIDEO;
        videoContext->codec_id = decoder->codec_id;
        videoContext->bit_rate = 512000;
        videoContext->width = decoder->width;
        videoContext->height = decoder->height;
        videoContext->time_base.den = 25;
        videoContext->time_base.num = 1;
        videoContext->gop_size = decoder->gop_size;
        videoContext->pix_fmt = decoder->pix_fmt;

        audioStream = av_new_stream(oc, 1);
        if (!audioStream)
        {
            cout << "ERROR creating audio stream" << endl;
            return;
        }
        ai = audioStream->index;
        audioContext = audioStream->codec;
        audioContext->codec_type = AVMEDIA_TYPE_AUDIO;
        audioContext->codec_id = CODEC_ID_AAC;
        audioContext->bit_rate = 64000;
        audioContext->sample_rate = 16000;
        audioContext->channels = 1;

        if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        {
            videoContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
            audioContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
        }

        av_dump_format(oc, 0, filename, 1);

        if (!(oc->oformat->flags & AVFMT_NOFILE))
        {
            if (avio_open(&oc->pb, filename, AVIO_FLAG_WRITE) < 0) {
                cout << "Error opening file" << endl;
            }
        }

        avformat_write_header(oc, NULL);
    }

    As far as I can tell, a number of the assumptions didn't seem to matter, for example:
    1. Bit rate. The actual video bit rate was 262 kbit/s whereas I specified 512 kbit/s.
    2. AAC channels. I specified mono, although from memory the actual output was stereo.

    You would still need to know what the frame rate (time base) is for the video & audio.

    Contrary to a lot of other examples, setting pts & dts to 0 on the video packets left the file unplayable. I needed to know the time base (25 fps) and then set the pts & dts according to that time base, i.e. first frame = 0 (PPS, SPS, I), second frame = 1 (intermediate frame, whatever it's called ;)).

    For AAC, I also had to assume the sample rate was 16000 Hz, with 1024 samples per AAC packet (AAC can also use 960 samples per frame, I think), in order to determine the audio "offset". I added this to the pts & dts, so the pts/dts are the sample number at which the packet is to be played back. You also need to make sure the duration of 1024 is set in the packet before writing.
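The timestamp rules from the two paragraphs above reduce to plain arithmetic, shown here as a sketch (helper names are mine, not from the post; assumes 25 fps video, 16000 Hz AAC, 1024 samples per frame):

```cpp
#include <cassert>
#include <cstdint>

// With a video time_base of 1/25, the pts/dts is simply the frame index.
int64_t videoPts(int frameNumber) { return frameNumber; }

// The audio start offset converts the wall-clock gap between the first
// video and first audio packets (in milliseconds) into samples.
int64_t audioStartOffset(uint64_t firstAudioMs, uint64_t firstVideoMs,
                         int sampleRate)
{
    return (int64_t)(firstAudioMs - firstVideoMs) * sampleRate / 1000;
}

// With an audio time_base of 1/16000, the pts/dts is the sample index:
// the start offset plus 1024 samples per AAC frame already written.
int64_t audioPts(int frameNumber, int64_t startOffset)
{
    return startOffset + (int64_t)frameNumber * 1024;
}
```

So if the first audio packet arrives 500 ms after the first video packet, the first AAC frame lands at sample 8000 and every subsequent frame advances by 1024, matching the `duration = 1024` set on each packet.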

    Additionally, I have found today that Annex B isn't really compatible with most other players, so the AVCC format should really be used.

    These URLs helped:
    Problem to Decode H264 video over RTP with ffmpeg (libavcodec)
    http://aviadr1.blogspot.com.au/2010/05/h264-extradata-partially-explained-for.html

    When constructing the video stream, I filled out the extradata & extradata_size:

    // Extradata contains PPS & SPS for AVCC format
    int extradata_len = 8 + spsFrameLen-4 + 1 + 2 + ppsFrameLen-4;
    videoContext->extradata = (uint8_t*)av_mallocz(extradata_len);
    videoContext->extradata_size = extradata_len;
    videoContext->extradata[0] = 0x01;
    videoContext->extradata[1] = spsFrame[4+1];
    videoContext->extradata[2] = spsFrame[4+2];
    videoContext->extradata[3] = spsFrame[4+3];
    videoContext->extradata[4] = 0xFC | 3;
    videoContext->extradata[5] = 0xE0 | 1;
    int tmp = spsFrameLen - 4;
    videoContext->extradata[6] = (tmp >> 8) & 0x00ff;
    videoContext->extradata[7] = tmp & 0x00ff;
    int i = 0;
    for (i = 0; i < tmp; i++)
        videoContext->extradata[8+i] = spsFrame[4+i];
    videoContext->extradata[8+tmp] = 0x01;
    int tmp2 = ppsFrameLen-4;
    videoContext->extradata[8+tmp+1] = (tmp2 >> 8) & 0x00ff;
    videoContext->extradata[8+tmp+2] = tmp2 & 0x00ff;
    for (i = 0; i < tmp2; i++)
        videoContext->extradata[8+tmp+3+i] = ppsFrame[4+i];

    When writing out the frames, don’t prepend the SPS & PPS frames, just write out the I Frame & P frames. In addition, replace the Annex B start code contained in the first 4 bytes (0x00 0x00 0x00 0x01) with the size of the I/P frame.
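The hand-rolled extradata above is an avcC box (AVCDecoderConfigurationRecord). A self-contained version of the same layout, written as a function so it can be sanity-checked (the `buildAvcC` name is mine; it assumes, as in the snippet, that the SPS/PPS buffers still carry their 4-byte Annex B start codes):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Build avcC extradata from SPS/PPS buffers that still carry their
// 4-byte Annex B start codes.  Layout: configurationVersion, profile,
// profile_compatibility, level, lengthSizeMinusOne (4-byte lengths),
// SPS count + length + payload, PPS count + length + payload.
std::vector<uint8_t> buildAvcC(const uint8_t* sps, int spsLen,
                               const uint8_t* pps, int ppsLen)
{
    int spsBody = spsLen - 4;                  // drop the 00 00 00 01 start code
    int ppsBody = ppsLen - 4;
    std::vector<uint8_t> e;
    e.push_back(0x01);                         // configurationVersion
    e.push_back(sps[5]);                       // AVCProfileIndication
    e.push_back(sps[6]);                       // profile_compatibility
    e.push_back(sps[7]);                       // AVCLevelIndication
    e.push_back(0xFC | 3);                     // lengthSizeMinusOne = 3
    e.push_back(0xE0 | 1);                     // one SPS
    e.push_back((spsBody >> 8) & 0xff);
    e.push_back(spsBody & 0xff);
    e.insert(e.end(), sps + 4, sps + 4 + spsBody);
    e.push_back(0x01);                         // one PPS
    e.push_back((ppsBody >> 8) & 0xff);
    e.push_back(ppsBody & 0xff);
    e.insert(e.end(), pps + 4, pps + 4 + ppsBody);
    return e;
}
```

Note how the profile/compatibility/level bytes come from offsets 5..7 of the SPS buffer: offset 4 is the NAL header byte (0x67), and the three bytes after it are exactly the fields avcC repeats.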