Media (91)

Other articles (40)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Installation in farm mode

    4 February 2011

    Farm mode makes it possible to host several MediaSPIP sites while installing its functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge, since SPIP's usual private area is no longer used.
    As a first step, you must have installed the same files as the installation (...)

On other sites (5358)

  • GStreamer with MPEG-TS Video4Linux ATSC/DVB Recording

    21 June 2013, by Dustin Oprea

    I'm having an impossible time setting up a filtergraph to read from a recording that I made from my DVB video4linux device. Any help would be vastly appreciated.

    How I made the recording:

    To tune the channel:

    azap -c ~/channels.conf "Florida"

    To record the channel:

    cat /dev/dvb/adapter0/dvr0 > /tmp/test

    This is the way the recordings must be made (I cannot use any GST DVB plugins to do this for me).

    I used tstools to identify that the recording is a TS stream:

    tstools/bin$ ./stream_type ~/recordings/20130129-202049
    Reading from /home/dustin/recordings/20130129-202049
    It appears to be Transport Stream

    ...but that there are no PAT/PMT packets:

    tstools/bin$ ./tsinfo ~/recordings/20130129-202049
    Reading from /home/dustin/recordings/20130129-202049
    Scanning 10000 TS packets

    Found 0 PAT packets and 0 PMT packets in 10000 TS packets
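
    Since (as far as I can tell) GStreamer's mpegtsdemux relies on the PAT/PMT tables to discover streams, regenerating those tables seems like it should help. A plain FFmpeg remux ought to rebuild them; a sketch only, not verified against this capture:

    # Copy both streams into a fresh TS; the muxer writes new PAT/PMT tables.
    ffmpeg -i 20130129-202049 -vcodec copy -acodec copy fixed.ts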

    I was able to produce a single elementary stream (ES) by running ts2es:

    tstools/bin$ ./ts2es -pid 97 ~/recordings/20130129-202049 ~/recordings/20130129-202049.es
    Reading from /home/dustin/recordings/20130129-202049
    Writing to   /home/dustin/recordings/20130129-202049.es
    Extracting packets for PID 0061 (97)
    !!! 4 bytes ignored at end of file - not enough to make a TS packet
    Extracted 219258 of 248113 TS packets

    I am able to play the ES stream (even though the video is frozen on the first frame):

    gst-launch-0.10 filesrc location=~/recordings/20130129-202049.es ! decodebin2 ! autovideosink
    gst-launch-0.10 filesrc location=~/recordings/20130129-202049.es ! decodebin2 ! xvimagesink
    gst-launch-0.10 playbin2 uri=file:///home/dustin/recordings/20130129-202049.es
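
    Presumably the frozen playback is because a raw ES carries no container timestamps. Rewrapping the ES into a fresh container should restore them (the MPEG-2 sequence headers carry the frame rate); a sketch, untested on this capture:

    # Stream-copy the raw MPEG-2 ES into a new transport stream.
    ffmpeg -i 20130129-202049.es -vcodec copy -f mpegts rewrapped.ts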

    No matter what I do, though, I can't get the original TS file to open. It opens perfectly in MPlayer and FFmpeg, however (but not in VLC). This is the output of FFmpeg:

    ffmpeg -i 20130129-202049

    ffmpeg version 0.8.5-4:0.8.5-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
     built on Jan 24 2013 18:03:14 with gcc 4.6.3
    *** THIS PROGRAM IS DEPRECATED ***
    This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
    [mpeg2video @ 0x9be7be0] mpeg_decode_postinit() failure
       Last message repeated 4 times
    [mpegts @ 0x9be3aa0] max_analyze_duration reached
    [mpegts @ 0x9be3aa0] PES packet size mismatch
    Input #0, mpegts, from '20130129-202049':
     Duration: 00:03:39.99, start: 9204.168844, bitrate: 1696 kb/s
       Stream #0.0[0x61]: Video: mpeg2video (Main), yuv420p, 528x480 [PAR 40:33 DAR 4:3], 15000 kb/s, 30.57 fps, 29.97 tbr, 90k tbn, 59.94 tbc
       Stream #0.1[0x64]: Audio: ac3, 48000 Hz, stereo, s16, 192 kb/s
    At least one output file must be specified

    This tells us that the video stream has PID 0x61 (97).

    I have been trying for a few days, so the following are only examples of a couple of attempts. I'll provide the playbin2 example first, since I know that thousands of people will respond, insisting that I just use that. It doesn't work.

    gst-launch-0.10 playbin2 uri=file:///home/dustin/recordings/20130129-202049

    Setting pipeline to PAUSED ...
    Pipeline is PREROLLING ...
    ERROR: from element /GstPlayBin2:playbin20/GstURIDecodeBin:uridecodebin0/GstDecodeBin2:decodebin20/GstMpegTSDemux:mpegtsdemux0: Could not determine type of stream.
    Additional debug info:
    gstmpegtsdemux.c(2931): gst_mpegts_demux_sink_event (): /GstPlayBin2:playbin20/GstURIDecodeBin:uridecodebin0/GstDecodeBin2:decodebin20/GstMpegTSDemux:mpegtsdemux0:
    No valid streams found at EOS
    ERROR: pipeline doesn't want to preroll.
    Setting pipeline to NULL ...
    Freeing pipeline ...

    It fails [probably] because no PID has been specified with which to find the video (the "EOS" error, I think).

    Naturally, I tried the following to start by demuxing the TS format. I believe the "es-pids" property is the one that receives the PID in the absence of PMT information (which, as tstools showed above, is missing), but I tried "program-number" too, just in case. gst-inspect indicates that one is hex and the other is decimal:

    gst-launch-0.10 -v filesrc location=20130129-202049 ! mpegtsdemux es-pids=0x61 ! fakesink
    gst-launch-0.10 -v filesrc location=20130129-202049 ! mpegtsdemux program-number=97 ! fakesink

    Output:

    gst-launch-0.10 filesrc location=20130129-202049 ! mpegtsdemux es-pids=0x61 ! fakesink
    Setting pipeline to PAUSED ...
    Pipeline is PREROLLING ...
    ERROR: from element /GstPipeline:pipeline0/GstMpegTSDemux:mpegtsdemux0: Could not determine type of stream.
    Additional debug info:
    gstmpegtsdemux.c(2931): gst_mpegts_demux_sink_event (): /GstPipeline:pipeline0/GstMpegTSDemux:mpegtsdemux0:
    No valid streams found at EOS
    ERROR: pipeline doesn't want to preroll.
    Setting pipeline to NULL ...
    Freeing pipeline ...
    (The -v runs produce the same output, repeated verbatim.)

    However, when I try mpegpsdemux (for program streams (PS), as opposed to transport streams (TS)), I get further:

    gst-launch-0.10 filesrc location=20130129-202049 ! mpegpsdemux ! fakesink

    Setting pipeline to PAUSED ...
    Pipeline is PREROLLING ...

    (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_event_new_new_segment_full: assertion `position != -1' failed

    (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_mini_object_ref: assertion `mini_object != NULL' failed

    (gst-launch-0.10:14805): GStreamer-CRITICAL **: gst_pad_push_event: assertion `event != NULL' failed
    Pipeline is PREROLLED ...
    Setting pipeline to PLAYING ...
    New clock: GstSystemClock


    ...


    Got EOS from element "pipeline0".
    Execution ended after 1654760008 ns.
    Setting pipeline to PAUSED ...
    Setting pipeline to READY ...
    Setting pipeline to NULL ...
    Freeing pipeline ...

    I still get the same problem whenever I use mpegtsdemux, even when it follows mpegpsdemux as above.

    I don't understand this, since I haven't even picked a PID yet.
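
    One other thing that might be worth trying is bypassing typefind by forcing the caps on the link into the demuxer; a sketch, and I'm not certain of the exact caps string for GStreamer 0.10:

    gst-launch-0.10 filesrc location=20130129-202049 ! 'video/mpegts,systemstream=true,packetsize=188' ! mpegtsdemux es-pids=0x61 ! fakesink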

    What am I doing wrong?

    Dustin

  • FFMPEG: cannot play MPEG4 video encoded from images. Duration and bitrate undefined

    17 June 2013, by KaiK

    I've been trying to put an H.264 video stream created from images into an MP4 container. I've been able to get the video stream from the images successfully, but when muxing it into the container I must be doing something wrong, because no player is able to play the result. Even ffplay only half-works: it plays the video to the end, and then the image stays frozen forever.

    ffplay cannot identify the duration or the bitrate, so I suppose it might be an issue related to dts and pts, but I've searched for a solution without success.

    Here's the ffplay output:

    ~$ ffplay testContainer.mp4
    ffplay version git-2012-01-31-c673671 Copyright (c) 2003-2012 the FFmpeg developers
     built on Feb  7 2012 20:32:12 with gcc 4.4.3
     configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-        libfaac --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-x11grab --enable-libvpx --enable-libmp3lame --enable-debug=3
     libavutil      51. 36.100 / 51. 36.100
     libavcodec     54.  0.102 / 54.  0.102
     libavformat    54.  0.100 / 54.  0.100
     libavdevice    53.  4.100 / 53.  4.100
     libavfilter     2. 60.100 /  2. 60.100
     libswscale      2.  1.100 /  2.  1.100
     libswresample   0.  6.100 /  0.  6.100
     libpostproc    52.  0.100 / 52.  0.100
    [h264 @ 0xa4849c0] max_analyze_duration 5000000 reached at 5000000
    [h264 @ 0xa4849c0] Estimating duration from bitrate, this may be inaccurate
    Input #0, h264, from 'testContainer.mp4':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (High), yuv420p, 512x512, 25 fps, 25 tbr, 1200k tbn, 50 tbc
          2.74 A-V:  0.000 fd=   0 aq=    0KB vq=  160KB sq=    0B f=0/0   0/0

    Structure

    My code is C++, so I have a class that handles all the encoding, and then a main that initializes it, passes in some images in a loop, and finally signals the end of the process, as follows:

    int main (int argc, const char * argv[])
    {

    MyVideoEncoder* videoEncoder = new MyVideoEncoder(512, 512, 512, 512, "output/testContainer.mp4", 25, 20);
    if(!videoEncoder->initWithCodec(MyVideoEncoder::H264))
    {
       std::cout << "something really bad happened. Exit!!" << std::endl;
       exit(-1);
    }

    /* encode 1 second of video */
    for(int i=0;i<228;i++) {

       std::stringstream filepath;
       filepath << "input2/image" << i << ".jpg";

   videoEncoder->encodeFrameFromJPG(const_cast<char*>(filepath.str().c_str()));

    }

    videoEncoder->endEncoding();

    }

    Hints

    I've seen a lot of examples of decoding one video and encoding it into another, but no working example of muxing a video from scratch, so I'm not sure how to proceed with the pts and dts packet values. That's why I suspect the issue must be in the following method:

    bool MyVideoEncoder::encodeImageAsFrame(){
       bool res = false;


       pTempFrame->pts = frameCount * frameRate * 90; //90Hz by the standard for PTS-values
       frameCount++;

       /* encode the image */
       out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, pTempFrame);


       if (out_size > 0) {
           AVPacket pkt;
           av_init_packet(&pkt);
           pkt.pts = pkt.dts = 0;

           if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE) {
               pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
                       pVideoStream->codec->time_base, pVideoStream->time_base);
               pkt.dts = pTempFrame->pts;

           }
           if (pVideoStream->codec->coded_frame->key_frame) {
               pkt.flags |= AV_PKT_FLAG_KEY;
           }
           pkt.stream_index = pVideoStream->index;
           pkt.data = outbuf;
           pkt.size = out_size;

           res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
       }


       return res;
    }
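
    For comparison, my understanding of the pattern in libav's muxing example is sketched below: the frame pts counts codec time_base ticks (one per frame, given time_base = {1, frameRate}), and the packet pts is the encoder's output timestamp rescaled into the stream time base, rather than mixing the two units as above. The dts handling here assumes no B-frame reordering, so this is a sketch only, not verified against my setup:

    pTempFrame->pts = frameCount++;  // one time_base tick ({1, frameRate}) per frame

    out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, pTempFrame);
    if (out_size > 0) {
        AVPacket pkt;
        av_init_packet(&pkt);
        // Rescale the encoder's timestamp into the stream's time base.
        if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE)
            pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
                                   pVideoStream->codec->time_base,
                                   pVideoStream->time_base);
        pkt.dts = pkt.pts;  // valid only without B-frame reordering
        if (pVideoStream->codec->coded_frame->key_frame)
            pkt.flags |= AV_PKT_FLAG_KEY;
        pkt.stream_index = pVideoStream->index;
        pkt.data = outbuf;
        pkt.size = out_size;
        res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
    }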

    Any help or insight would be appreciated. Thanks in advance!!

    P.S. The rest of the code, where the configuration is done, is the following:

    // MyVideoEncoder.cpp

    #include "MyVideoEncoder.h"
    #include "Image.hpp"
    #include <cstring>
    #include <sstream>
    #include

    #define MAX_AUDIO_PACKET_SIZE (128 * 1024)



    MyVideoEncoder::MyVideoEncoder(int inwidth, int inheight,
           int outwidth, int outheight, char* fileOutput, int framerate,
           int compFactor) {
       inWidth = inwidth;
       inHeight = inheight;
       outWidth = outwidth;
       outHeight = outheight;
       pathToMovie = fileOutput;
       frameRate = framerate;
       compressionFactor = compFactor;
       frameCount = 0;

    }

    MyVideoEncoder::~MyVideoEncoder() {

    }

    bool MyVideoEncoder::initWithCodec(
           MyVideoEncoder::encoderType type) {
       if (!initializeEncoder(type))
           return false;

       if (!configureFrames())
           return false;

       return true;

    }

    bool MyVideoEncoder::encodeFrameFromJPG(char* filepath) {

       setJPEGImage(filepath);
       return encodeImageAsFrame();
    }



    bool MyVideoEncoder::encodeDelayedFrames(){
       bool res = false;

       while(out_size > 0)
       {
           pTempFrame->pts = frameCount * frameRate * 90; //90Hz by the standard for PTS-values
           frameCount++;

           out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, NULL);

           if (out_size > 0)
           {
               AVPacket pkt;
           av_init_packet(&pkt);
               pkt.pts = pkt.dts = 0;

               if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE) {
                   pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
                           pVideoStream->codec->time_base, pVideoStream->time_base);
                   pkt.dts = pTempFrame->pts;
               }
               if (pVideoStream->codec->coded_frame->key_frame) {
                   pkt.flags |= AV_PKT_FLAG_KEY;
               }
               pkt.stream_index = pVideoStream->index;
               pkt.data = outbuf;
               pkt.size = out_size;


               res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
           }

       }

       return res;
    }






    void MyVideoEncoder::endEncoding() {
       encodeDelayedFrames();
       closeEncoder();
    }

    bool MyVideoEncoder::setJPEGImage(char* imgFilename) {
       Image* rgbImage = new Image();
       rgbImage->read_jpeg_image(imgFilename);

       bool ret = setImageFromRGBArray(rgbImage->get_data());

       delete rgbImage;

       return ret;
    }

    bool MyVideoEncoder::setImageFromRGBArray(unsigned char* data) {

       memcpy(pFrameRGB->data[0], data, 3 * inWidth * inHeight);

       int ret = sws_scale(img_convert_ctx, pFrameRGB->data, pFrameRGB->linesize,
               0, inHeight, pTempFrame->data, pTempFrame->linesize);

       pFrameRGB->pts++;
       if (ret)
           return true;
       else
           return false;
    }

    bool MyVideoEncoder::initializeEncoder(encoderType type) {

       av_register_all();

       pTempFrame = avcodec_alloc_frame();
       pTempFrame->pts = 0;
       pOutFormat = NULL;
       pFormatContext = NULL;
       pVideoStream = NULL;
       pAudioStream = NULL;
       bool res = false;

       // Create format
       switch (type) {
           case MyVideoEncoder::H264:
               pOutFormat = av_guess_format("h264", NULL, NULL);
               break;
           case MyVideoEncoder::MPEG1:
               pOutFormat = av_guess_format("mpeg", NULL, NULL);
               break;
           default:
               pOutFormat = av_guess_format(NULL, pathToMovie.c_str(), NULL);
               break;
       }

       if (!pOutFormat) {
           pOutFormat = av_guess_format(NULL, pathToMovie.c_str(), NULL);
           if (!pOutFormat) {
           std::cout << "output format not found" << std::endl;
               return false;
           }
       }


       // allocate context
       pFormatContext = avformat_alloc_context();
       if(!pFormatContext)
       {
       std::cout << "cannot alloc format context" << std::endl;
           return false;
       }

       pFormatContext->oformat = pOutFormat;

       memcpy(pFormatContext->filename, pathToMovie.c_str(), min( (const int) pathToMovie.length(), (const int)sizeof(pFormatContext->filename)));


       //Add video and audio streams
       pVideoStream = AddVideoStream(pFormatContext,
               pOutFormat->video_codec);

       // Set the output parameters
       av_dump_format(pFormatContext, 0, pathToMovie.c_str(), 1);

       // Open Video stream
       if (pVideoStream) {
           res = openVideo(pFormatContext, pVideoStream);
       }


   if (res && !(pOutFormat->flags & AVFMT_NOFILE)) {
       if (avio_open(&pFormatContext->pb, pathToMovie.c_str(), AVIO_FLAG_WRITE) < 0) {
           res = false;
           std::cout << "Cannot open output file" << std::endl;
           }
       }

       if (res) {
           avformat_write_header(pFormatContext,NULL);
       }
       else{
           freeMemory();
       std::cout << "Cannot init encoder" << std::endl;
       }


       return res;

    }



    AVStream *MyVideoEncoder::AddVideoStream(AVFormatContext *pContext, CodecID codec_id)
    {
     AVCodecContext *pCodecCxt = NULL;
     AVStream *st    = NULL;

     st = avformat_new_stream(pContext, NULL);
     if (!st)
     {
     std::cout << "Cannot add new video stream" << std::endl;
         return NULL;
     }
     st->id = 0;

     pCodecCxt = st->codec;
     pCodecCxt->codec_id = (CodecID)codec_id;
     pCodecCxt->codec_type = AVMEDIA_TYPE_VIDEO;
     pCodecCxt->frame_number = 0;


     // Put sample parameters.
     pCodecCxt->bit_rate = outWidth * outHeight * 3 * frameRate/ compressionFactor;

     pCodecCxt->width  = outWidth;
     pCodecCxt->height = outHeight;

     /* frames per second */
     pCodecCxt->time_base= (AVRational){1,frameRate};

     /* pixel format must be YUV */
     pCodecCxt->pix_fmt = PIX_FMT_YUV420P;


     if (pCodecCxt->codec_id == CODEC_ID_H264)
     {
         av_opt_set(pCodecCxt->priv_data, "preset", "slow", 0);
         av_opt_set(pCodecCxt->priv_data, "vprofile", "baseline", 0);
         pCodecCxt->max_b_frames = 16;
     }
     if (pCodecCxt->codec_id == CODEC_ID_MPEG1VIDEO)
     {
         pCodecCxt->mb_decision = 1;
     }

 if(pContext->oformat->flags & AVFMT_GLOBALHEADER)
     {
         pCodecCxt->flags |= CODEC_FLAG_GLOBAL_HEADER;
     }

     pCodecCxt->coder_type = 1;  // coder = 1
     pCodecCxt->flags|=CODEC_FLAG_LOOP_FILTER;   // flags=+loop
     pCodecCxt->me_range = 16;   // me_range=16
     pCodecCxt->gop_size = 50;  // g=250
     pCodecCxt->keyint_min = 25; // keyint_min=25


     return st;
    }


    bool MyVideoEncoder::openVideo(AVFormatContext *oc, AVStream *pStream)
    {
       AVCodec *pCodec;
       AVCodecContext *pContext;

       pContext = pStream->codec;

       // Find the video encoder.
       pCodec = avcodec_find_encoder(pContext->codec_id);
       if (!pCodec)
       {
       std::cout << "Cannot find video codec" << std::endl;
           return false;
       }

       // Open the codec.
   if (avcodec_open2(pContext, pCodec, NULL) < 0)
       {
       std::cout << "Cannot open video codec" << std::endl;
           return false;
       }


       return true;
    }



    bool MyVideoEncoder::configureFrames() {

       /* alloc image and output buffer */
       outbuf_size = outWidth*outHeight*3;
       outbuf = (uint8_t*) malloc(outbuf_size);

       av_image_alloc(pTempFrame->data, pTempFrame->linesize, pVideoStream->codec->width,
               pVideoStream->codec->height, pVideoStream->codec->pix_fmt, 1);

       //Alloc RGB temp frame
       pFrameRGB = avcodec_alloc_frame();
       if (pFrameRGB == NULL)
           return false;
       avpicture_alloc((AVPicture *) pFrameRGB, PIX_FMT_RGB24, inWidth, inHeight);

       pFrameRGB->pts = 0;

       //Set SWS context to convert from RGB images to YUV images
       if (img_convert_ctx == NULL) {
           img_convert_ctx = sws_getContext(inWidth, inHeight, PIX_FMT_RGB24,
                   outWidth, outHeight, pVideoStream->codec->pix_fmt, /*SWS_BICUBIC*/
                   SWS_FAST_BILINEAR, NULL, NULL, NULL);
           if (img_convert_ctx == NULL) {
               fprintf(stderr, "Cannot initialize the conversion context!\n");
               return false;
           }
       }

       return true;

    }

    void MyVideoEncoder::closeEncoder() {
       av_write_frame(pFormatContext, NULL);
       av_write_trailer(pFormatContext);
       freeMemory();
    }


    void MyVideoEncoder::freeMemory()
    {
     bool res = true;

     if (pFormatContext)
     {
       // close video stream
       if (pVideoStream)
       {
         closeVideo(pFormatContext, pVideoStream);
       }

       // Free the streams.
   for(size_t i = 0; i < pFormatContext->nb_streams; i++)
   {
     av_freep(&pFormatContext->streams[i]->codec);
     av_freep(&pFormatContext->streams[i]);
   }

   if (!(pFormatContext->flags & AVFMT_NOFILE) && pFormatContext->pb)
       {
         avio_close(pFormatContext->pb);
       }

       // Free the stream.
       av_free(pFormatContext);
       pFormatContext = NULL;
     }
    }

    void MyVideoEncoder::closeVideo(AVFormatContext *pContext, AVStream *pStream)
    {
     avcodec_close(pStream->codec);
     if (pTempFrame)
     {
       if (pTempFrame->data)
       {
         av_free(pTempFrame->data[0]);
         pTempFrame->data[0] = NULL;
       }
       av_free(pTempFrame);
       pTempFrame = NULL;
     }

     if (pFrameRGB)
     {
       if (pFrameRGB->data)
       {
         av_free(pFrameRGB->data[0]);
         pFrameRGB->data[0] = NULL;
       }
       av_free(pFrameRGB);
       pFrameRGB = NULL;
     }

    }
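
    One more observation on the configuration code: av_guess_format("h264", NULL, NULL) selects libav's raw H.264 muxer, not the MP4 muxer, which would match ffplay reporting the input as "h264" rather than an MP4 container. A sketch of letting the file name choose the container instead (assuming the .mp4 extension is kept):

    // Guess the muxer from the output file name ("output/testContainer.mp4"),
    // so the MP4 container is actually used; fall back to mp4 explicitly.
    pOutFormat = av_guess_format(NULL, pathToMovie.c_str(), NULL);
    if (!pOutFormat)
        pOutFormat = av_guess_format("mp4", NULL, NULL);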