
Other articles (106)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    After it is activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is operational right away. It is therefore not necessary to go through a configuration step for this.

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
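The conversions described above map onto plain ffmpeg invocations. A sketch with the classic codec flags (file names and bitrates are placeholders, and the exact options vary by ffmpeg version, so treat these as illustrative rather than verified):

```
ffmpeg -i input.mov -c:v libx264 output.mp4
ffmpeg -i input.mov -c:v libtheora -c:a libvorbis output.ogv
ffmpeg -i input.mov -c:v libvpx -c:a libvorbis output.webm
ffmpeg -i input.mp3 -c:a libvorbis output.ogg
```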

  • Media-specific libraries and software

    10 December 2010, by

    For correct and optimal operation, several things need to be taken into account.
    After installing apache2, mysql and php5, it is important to install the other required software, whose installation is described in the relevant links. A set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio, in order to support as many file types as possible. See: this tutorial; FFMpeg with the maximum number of decoders and (...)

On other sites (6330)

  • streaming H.264 over RTP with libavformat

    16 April 2012, by Jacob Peddicord

    I've been trying over the past week to implement H.264 streaming over RTP, using x264 as an encoder and libavformat to pack and send the stream. Problem is, as far as I can tell it's not working correctly.

    Right now I'm just encoding random data (x264_picture_alloc) and extracting NAL frames from libx264. This is fairly simple:

    x264_picture_t pic_out;
    x264_nal_t* nals;
    int num_nals;
    int frame_size = x264_encoder_encode(this->encoder, &nals, &num_nals, this->pic_in, &pic_out);

    if (frame_size <= 0)
    {
       return frame_size;
    }

    // push NALs into the queue
    for (int i = 0; i < num_nals; i++)
    {
       // create a NAL storage unit
       NAL nal;
       nal.size = nals[i].i_payload;
       nal.payload = new uint8_t[nal.size];
       memcpy(nal.payload, nals[i].p_payload, nal.size);

       // push the storage into the NAL queue
       {
           // lock and push the NAL to the queue
           boost::mutex::scoped_lock lock(this->nal_lock);
           this->nal_queue.push(nal);
       }
    }

    nal_queue is used for safely passing frames over to a Streamer class which will then send the frames out. Right now it's not threaded, as I'm just testing to try to get this to work. Before encoding individual frames, I've made sure to initialize the encoder.

    But I don't believe x264 is the issue, as I can see frame data in the NALs it returns.
    Streaming the data is accomplished with libavformat, which is first initialized in a Streamer class:

    Streamer::Streamer(Encoder* encoder, string rtp_address, int rtp_port, int width, int height, int fps, int bitrate)
    {
       this->encoder = encoder;

       // initialize the AV context
       this->ctx = avformat_alloc_context();
       if (!this->ctx)
       {
           throw runtime_error("Couldn't initialize AVFormat output context");
       }

       // get the output format
       this->fmt = av_guess_format("rtp", NULL, NULL);
       if (!this->fmt)
       {
           throw runtime_error("Unsuitable output format");
       }
       this->ctx->oformat = this->fmt;

       // try to open the RTP stream
       snprintf(this->ctx->filename, sizeof(this->ctx->filename), "rtp://%s:%d", rtp_address.c_str(), rtp_port);
       if (url_fopen(&(this->ctx->pb), this->ctx->filename, URL_WRONLY) < 0)
       {
           throw runtime_error("Couldn't open RTP output stream");
       }

       // add an H.264 stream
       this->stream = av_new_stream(this->ctx, 1);
       if (!this->stream)
       {
           throw runtime_error("Couldn't allocate H.264 stream");
       }

       // initialize codec
       AVCodecContext* c = this->stream->codec;
       c->codec_id = CODEC_ID_H264;
       c->codec_type = AVMEDIA_TYPE_VIDEO;
       c->bit_rate = bitrate;
       c->width = width;
       c->height = height;
       c->time_base.den = fps;
       c->time_base.num = 1;

       // write the header
       av_write_header(this->ctx);
    }

    This is where things seem to go wrong. av_write_header above seems to do absolutely nothing; I've used Wireshark to verify this. For reference, I use Streamer streamer(&enc, "10.89.6.3", 49990, 800, 600, 30, 40000); to initialize the Streamer instance, with enc being a reference to an Encoder object used to handle x264 previously.

    Now when I want to stream out a NAL, I use this:

    // grab a NAL
    NAL nal = this->encoder->nal_pop();
    cout << "NAL popped with size " << nal.size << endl;

    // initialize a packet
    AVPacket p;
    av_init_packet(&p);
    p.data = nal.payload;
    p.size = nal.size;
    p.stream_index = this->stream->index;

    // send it out
    av_write_frame(this->ctx, &p);

    At this point, I can see RTP data appearing over the network, and it looks like the frames I've been sending, even including a little copyright blob from x264. But, no player I've used has been able to make any sense of the data. VLC quits wanting an SDP description, which apparently isn't required.

    I then tried to play it through gst-launch:

    gst-launch udpsrc port=49990 ! rtph264depay ! decodebin ! xvimagesink

    This will sit waiting for UDP data, but when it is received, I get:

    ERROR: element /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: No RTP
    format was negotiated. Additional debug info:
    gstbasertpdepayload.c(372): gst_base_rtp_depayload_chain ():
    /GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0: Input buffers
    need to have RTP caps set on them. This is usually achieved by setting
    the 'caps' property of the upstream source element (often udpsrc or
    appsrc), or by putting a capsfilter element before the depayloader and
    setting the 'caps' property on that. Also see
    http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/gst/rtp/README

    As I'm not using GStreamer to do the streaming itself, I'm not quite sure what it means by RTP caps. But it makes me wonder if I'm not sending enough information over RTP to describe the stream. I'm pretty new to video and I feel like there's some key thing I'm missing here. Any hints?
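For what it's worth, both the VLC and the GStreamer complaints point the same way: a raw RTP session carries no in-band stream description, so the receiver needs an out-of-band SDP (libavformat can generate one with av_sdp_create). A hand-written SDP for the setup above might look roughly like the following; the payload type and the address/port are taken from the example, but treat the exact lines as an illustrative sketch, not verified output:

```
v=0
o=- 0 0 IN IP4 10.89.6.3
s=H.264 test stream
c=IN IP4 10.89.6.3
t=0 0
m=video 49990 RTP/AVP 96
a=rtpmap:96 H264/90000
```

Saving this as stream.sdp and opening that file in a player (or giving udpsrc an application/x-rtp caps string with equivalent information) is what tells the depayloader how to interpret the packets.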

  • FFmpegFrameGrabber video artefacts from RTSP network camera

    2 February 2015, by UncleChris

    I'm using JavaCV's FFmpegFrameGrabber to grab frames from my network camera over the RTSP protocol. The simplified code looks like this:

    /* from init method */
    // url like: rtsp://ip:port/stream1
    grabber = new FFmpegFrameGrabber(stream.getUrl());
    // type: RTP
    grabber.setFormat(stream.getMediaType());
    grabber.start();

    /* it's called in while loop from outside */
    public void grab() throws FrameProcessorsException {

       try {
           LOGGER.info(grabber.getFrameNumber());
           frame = grabber.grab();
       } catch (FrameGrabber.Exception e) {
           throw new FrameProcessorsException(e);
       }

       // I save my frames to other grabber, to make mp4 file to watch it later
       try {
           videoRecorder.recordFrame(frame, grabber.getTimestamp(), grabber.getImageWidth(), grabber.getImageHeight(), grabber.getAudioChannels());
       } catch (FrameRecorder.Exception e) {
           throw new FrameProcessorsException(e);
       }

       // my processing, the troublemaker
       long currentFrameNum = grabber.getFrameNumber();
       if (processing && currentFrameNum - lastFrameWithAnalysis >= PROCESS_FREQUENCY) {

           lastFrameWithAnalysis = currentFrameNum;

           Mat frameMat = new Mat(frame, false);
           try {
               LOGGER.info("Processing :" + grabber.getFrameNumber());
               AnalysisResult result = frameAnalyzer.processFrame(frameMat, (int) currentFrameNum);
               videoAnalysisSaver.saveFrameAnalysisResult(frameMat, result, (int) currentFrameNum);
           } catch (ServerErrorException | NotExistException e) {
               LOGGER.warn(e);
           }
       }
    }

    In the code you can see the processing variable. If it is set to false, I can watch my network streams on the page with no problems. But if I set it to true, I suddenly get visual artefacts, looking like this:

    http://answers.opencv.org/upfiles/1400931120927032.png

    I can also see some messages in my logs:

    [libx264 @ 0x7fe2a7e2ae00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 0x7fe2a7e2ae00] profile High, level 4.0
    [libx264 @ 0x7fe2a7e2ae00] 264 - core 142 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1,00:0,00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=400 ratetol=1,0 qcomp=0,60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1,40 aq=1:1,00
    [mp4 @ 0x7fe2909feee0] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
    2015-02-02 10:34:31,986 INFO  [img.StreamGrabber] 6
    2015-02-02 10:34:31,998 INFO  [img.StreamGrabber] Processing :1
    2015-02-02 10:34:32,881 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1524, 1564, 678, 718) is above threshold
    2015-02-02 10:34:32,882 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1538, 1577, 678, 717) is above threshold
    2015-02-02 10:34:32,884 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1320, 1420, 298, 398) is above threshold
    2015-02-02 10:34:33,199 INFO  [img.StreamGrabber] 1
    2015-02-02 10:34:33,212 INFO  [img.StreamGrabber] 2
    2015-02-02 10:34:33,222 INFO  [img.StreamGrabber] 3
    2015-02-02 10:34:33,232 INFO  [img.StreamGrabber] 4
    2015-02-02 10:34:33,244 INFO  [img.StreamGrabber] 5
    2015-02-02 10:34:33,255 INFO  [img.StreamGrabber] Processing :6
    2015-02-02 10:34:33,870 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1537, 1578, 678, 719) is above threshold
    2015-02-02 10:34:33,871 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1315, 1422, 298, 405) is above threshold
    2015-02-02 10:34:34,318 INFO  [img.StreamGrabber] 6
    2015-02-02 10:34:34,338 INFO  [img.StreamGrabber] 7
    2015-02-02 10:34:34,347 INFO  [img.StreamGrabber] 8
    2015-02-02 10:34:34,357 INFO  [img.StreamGrabber] 9
    2015-02-02 10:34:34,368 INFO  [img.StreamGrabber] 10
    2015-02-02 10:34:34,379 INFO  [img.StreamGrabber] Processing :11
    2015-02-02 10:34:35,025 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1561, 1618, 477, 534) is above threshold
    2015-02-02 10:34:35,027 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1318, 1421, 300, 403) is above threshold
    2015-02-02 10:34:35,185 INFO  [img.StreamGrabber] 11
    2015-02-02 10:34:35,202 INFO  [img.StreamGrabber] 12
    2015-02-02 10:34:35,213 INFO  [img.StreamGrabber] 13
    2015-02-02 10:34:35,223 INFO  [img.StreamGrabber] 14
    2015-02-02 10:34:35,235 INFO  [img.StreamGrabber] 15
    2015-02-02 10:34:35,286 INFO  [img.StreamGrabber] Processing :16
    2015-02-02 10:34:35,952 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1429, 1470, 703, 744) is above threshold
    2015-02-02 10:34:35,954 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1315, 1422, 295, 402) is above threshold
    2015-02-02 10:34:36,218 INFO  [img.StreamGrabber] 16
    2015-02-02 10:34:36,237 INFO  [img.StreamGrabber] 17
    2015-02-02 10:34:36,246 INFO  [img.StreamGrabber] 18
    2015-02-02 10:34:36,257 INFO  [img.StreamGrabber] 19
    2015-02-02 10:34:36,268 INFO  [img.StreamGrabber] 20
    2015-02-02 10:34:36,279 INFO  [img.StreamGrabber] Processing :21
    2015-02-02 10:34:36,967 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1562, 1616, 480, 534) is above threshold
    2015-02-02 10:34:36,968 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1314, 1420, 296, 402) is above threshold
    2015-02-02 10:34:37,186 INFO  [img.StreamGrabber] 21
    2015-02-02 10:34:37,206 INFO  [img.StreamGrabber] 22
    2015-02-02 10:34:37,217 INFO  [img.StreamGrabber] 23
    2015-02-02 10:34:37,227 INFO  [img.StreamGrabber] 24
    [h264 @ 0x7fe2915b30a0] RTP: missed 1514 packets
    [h264 @ 0x7fe2f1050ea0] Cannot use next picture in error concealment
    [h264 @ 0x7fe2f1050ea0] concealing 4608 DC, 4608 AC, 4608 MV errors in P frame
    2015-02-02 10:34:37,238 INFO  [img.StreamGrabber] 25
    2015-02-02 10:34:37,250 INFO  [img.StreamGrabber] Processing :26
    2015-02-02 10:34:37,944 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1562, 1616, 479, 533) is above threshold
    2015-02-02 10:34:37,945 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1315, 1422, 297, 404) is above threshold
    2015-02-02 10:34:38,107 INFO  [img.StreamGrabber] 26
    [h264 @ 0x7fe2915b30a0] RTP: missed 295 packets
    [h264 @ 0x7fe2a5713e00] Cannot use next picture in error concealment
    [h264 @ 0x7fe2a5713e00] concealing 1996 DC, 1996 AC, 1996 MV errors in P frame
    2015-02-02 10:34:38,120 INFO  [img.StreamGrabber] 27
    2015-02-02 10:34:38,130 INFO  [img.StreamGrabber] 28
    2015-02-02 10:34:38,143 INFO  [img.StreamGrabber] 29
    2015-02-02 10:34:38,231 INFO  [img.StreamGrabber] 30
    2015-02-02 10:34:38,249 INFO  [img.StreamGrabber] Processing :31
    2015-02-02 10:34:38,962 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1170, 1211, 322, 363) is above threshold
    2015-02-02 10:34:38,964 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1316, 1421, 298, 403) is above threshold
    2015-02-02 10:34:39,329 INFO  [img.StreamGrabber] 31
    [h264 @ 0x7fe2915b30a0] RTP: missed 232 packets
    [h264 @ 0x7fe2a4203d80] Cannot use next picture in error concealment
    [h264 @ 0x7fe2a4203d80] concealing 1142 DC, 1142 AC, 1142 MV errors in P frame
    2015-02-02 10:34:39,342 INFO  [img.StreamGrabber] 32
    2015-02-02 10:34:39,352 INFO  [img.StreamGrabber] 33
    [h264 @ 0x7fe2915b30a0] RTP: missed 1 packets
    [h264 @ 0x7fe2915b43c0] corrupted macroblock 86 67 (total_coeff=-1)
    [h264 @ 0x7fe2915b43c0] error while decoding MB 86 67
    [h264 @ 0x7fe2915b43c0] Cannot use next picture in error concealment
    [h264 @ 0x7fe2915b43c0] concealing 83 DC, 83 AC, 83 MV errors in P frame
    2015-02-02 10:34:39,362 INFO  [img.StreamGrabber] Processing :144
    2015-02-02 10:34:40,071 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1563, 1614, 480, 531) is above threshold
    2015-02-02 10:34:40,074 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1318, 1423, 296, 401) is above threshold
    2015-02-02 10:34:40,462 INFO  [img.StreamGrabber] 144
    2015-02-02 10:34:40,482 INFO  [img.StreamGrabber] 145
    [h264 @ 0x7fe2915b30a0] RTP: missed 377 packets
    [h264 @ 0x7fe2a515baa0] Cannot use next picture in error concealment
    [h264 @ 0x7fe2a515baa0] concealing 6822 DC, 6822 AC, 6822 MV errors in P frame
    2015-02-02 10:34:40,494 INFO  [img.StreamGrabber] Processing :167
    2015-02-02 10:34:41,222 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1563, 1615, 479, 531) is above threshold
    2015-02-02 10:34:41,230 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1319, 1421, 295, 397) is above threshold
    2015-02-02 10:34:41,930 INFO  [img.StreamGrabber] 167
    2015-02-02 10:34:41,947 INFO  [img.StreamGrabber] 168
    2015-02-02 10:34:41,958 INFO  [img.StreamGrabber] 169
    2015-02-02 10:34:41,970 INFO  [img.StreamGrabber] 170
    2015-02-02 10:34:41,985 INFO  [img.StreamGrabber] 171
    [h264 @ 0x7fe2915b30a0] RTP: missed 311 packets
    [h264 @ 0x7fe2f10506c0] Cannot use next picture in error concealment
    [h264 @ 0x7fe2f10506c0] concealing 1409 DC, 1409 AC, 1409 MV errors in P frame
    2015-02-02 10:34:41,997 INFO  [img.StreamGrabber] Processing :190
    2015-02-02 10:34:42,715 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1322, 1384, 340, 402) is above threshold
    2015-02-02 10:34:42,717 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1312, 1425, 290, 403) is above threshold
    2015-02-02 10:34:42,929 INFO  [img.StreamGrabber] 190
    [h264 @ 0x7fe2915b30a0] RTP: missed 13 packets
    [h264 @ 0x7fe2f1050ea0] Cannot use next picture in error concealment
    [h264 @ 0x7fe2f1050ea0] concealing 6489 DC, 6489 AC, 6489 MV errors in P frame
    2015-02-02 10:34:42,943 INFO  [img.StreamGrabber] 191
    [h264 @ 0x7fe2915b30a0] RTP: missed 484 packets
    [h264 @ 0x7fe2915b43c0] concealing 6609 DC, 6609 AC, 6609 MV errors in I frame
    2015-02-02 10:34:42,957 INFO  [img.StreamGrabber] 192
    2015-02-02 10:34:42,970 INFO  [img.StreamGrabber] 193
    [h264 @ 0x7fe2915b30a0] RTP: missed 313 packets
    [h264 @ 0x7fe2a51a0fc0] Cannot use next picture in error concealment
    [h264 @ 0x7fe2a51a0fc0] concealing 1666 DC, 1666 AC, 1666 MV errors in P frame
    2015-02-02 10:34:43,271 INFO  [img.StreamGrabber] 194
    2015-02-02 10:34:43,314 INFO  [img.StreamGrabber] Processing :249
    2015-02-02 10:34:44,099 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1322, 1384, 340, 402) is above threshold
    2015-02-02 10:34:44,100 INFO  [img.analysis.face.PersonFaceRecognizer] Predicted face (1313, 1403, 300, 390) is above threshold
    2015-02-02 10:34:45,473 INFO  [img.save.event.EventMaker] Creating 1 face recognition events
    2015-02-02 10:34:45,618 INFO  [core.task.StreamRecordingTaskExecutor] Stream recording task ended: rtsp://MYURL

    My guess is that my computer is simply too busy to catch all the packets from the camera stream. I'm operating on two streams: one is low quality at about 3 fps, the other is 30 fps. Of course, the problems show up all the time on the fast one and only rarely on the slow stream.
    I'm wondering if there is a way to force FFmpegFrameGrabber not to create artifacts, but simply drop the current frame and move on to the next one. Fps and frame continuity are not so important. I tried the grabber's setfps, settimestamp and delayedGrab methods to somehow slow down the 30-fps stream, but it didn't even react to that. I'm sure I'm doing something wrong.

    I've found some topics related to my problem, but they did not help me; maybe you will see more:
    http://answers.opencv.org/question/34012/ip-camera-h264-error-while-decoding/
    How to deal with cv::VideoCapture decode errors?
    http://superuser.com/questions/663928/ffmpeg-to-capture-stills-from-h-264-stream

    Thank you for your help.

  • FFMPEG C api h.264 encoding / MPEG2 ts streaming problems

    3 March 2015, by ccoral

    The class prototype is as follows:

    #ifndef _FULL_MOTION_VIDEO_STREAM_H_
    #define _FULL_MOTION_VIDEO_STREAM_H_

    #include <memory>
    #include <string>

    #ifndef INT64_C
    # define INT64_C(c) (c ## LL)
    # define UINT64_C(c) (c ## ULL)
    #endif

    extern "C"
    {
       #include "libavutil/opt.h"
       #include "libavcodec/avcodec.h"
       #include "libavutil/channel_layout.h"
       #include "libavutil/common.h"
       #include "libavutil/imgutils.h"
       #include "libavutil/mathematics.h"
       #include "libavutil/samplefmt.h"
       #include "libavformat/avformat.h"

       #include <libavutil/timestamp.h>
       #include <libswscale/swscale.h>
       #include <libswresample/swresample.h>
    }

    class FMVStream
    {
       public:
           struct OutputStream
           {
               OutputStream() :
               st(0),
               next_pts(0),
               samples_count(0),
               frame(0),
               tmpFrame(0),
               sws_ctx(0)
               {
               }

               AVStream *st;

               /* pts of the next frame that will be generated */
               int64_t next_pts;
               int samples_count;

               AVFrame *frame;
               AVFrame *tmpFrame;

               struct SwsContext *sws_ctx;
           };

           ///
           /// Constructor
           ///
           FMVStream();

           ///
           /// Destructor
           ///
           ~FMVStream();

           ///
           /// Frame encoder helper function
           ///
           /// Encodes a raw RGB frame into the transport stream
           ///
           int EncodeFrame(uint8_t* frame);

           ///
           /// Frame width setter
           ///
           void setFrameWidth(int width);

           ///
           /// Frame width getter
           ///
           int getFrameWidth() const;

           ///
           /// Frame height setter
           ///
           void setFrameHeight(int height);

           ///
           /// Frame height getter
           ///
           int getFrameHeight() const;

           ///
           /// Stream address setter
           ///
           void setStreamAddress(const std::string& address);

           ///
           /// Stream address getter
           ///
           std::string getStreamAddress() const;

       private:

           ///
           /// Video Stream creation
           ///
           AVStream* initVideoStream(AVFormatContext* oc);

           ///
           /// Raw frame transcoder
           ///
           /// This will convert the raw RGB frame to a raw YUV frame necessary for h.264 encoding
           ///
           void CopyFrameData(uint8_t* src_frame);

           ///
           /// Video frame allocator
           ///
           AVFrame* AllocPicture(PixelFormat pix_fmt, int width, int height);

           ///
           /// Debug print helper function
           ///
           void print_sdp(AVFormatContext **avc, int n);

           ///
           /// Write the frame to the stream
           ///
           int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt);

           ///
           /// initialize the frame data
           ///
           void initFrame();

           // formatting data needed for output streaming and the output container (MPEG 2 TS)
           AVOutputFormat* format;
           AVFormatContext* format_ctx;

           // structure container for our video stream
           OutputStream stream;

           AVIOContext* io_ctx;

           std::string streamFilename;

           int frameWidth;
           int frameHeight;
    };

    #endif

    The next block is the class implementation.

    #include "FullMotionVideoStream.h"

    #include <stdexcept>
    #include <iostream>

    FMVStream::FMVStream()
       : format(0),
       format_ctx(0),
       stream(),
       io_ctx(0),
       streamFilename("test.mpeg"),
       frameWidth(640),
       frameHeight(480)
    {
       // Register all formats and codecs
       av_register_all();
       avcodec_register_all();

       // Init networking
       avformat_network_init();

       // Find format
       this->format = av_guess_format("mpegts", NULL, NULL);

       // allocate the AVFormatContext
       this->format_ctx = avformat_alloc_context();

       if (!this->format_ctx)
       {
           throw std::runtime_error("avformat_alloc_context failed");
       }

       this->format_ctx->oformat = this->format;
       //sprintf_s(this->format_ctx->filename, sizeof(this->format_ctx->filename), "%s", this->streamFilename.c_str());

       this->stream.st = initVideoStream(this->format_ctx);

       this->initFrame();

       // Allocate AVIOContext
       int ret = avio_open(&this->io_ctx, this->streamFilename.c_str(), AVIO_FLAG_WRITE);

       if (ret != 0)
       {
           throw std::runtime_error("avio_open failed");
       }

       this->format_ctx->pb = this->io_ctx;

       // Print some debug info about the format
       av_dump_format(this->format_ctx, 0, NULL, 1);

       // Begin the output by writing the container header
       avformat_write_header(this->format_ctx, NULL);

       AVFormatContext* ac[] = { this->format_ctx };
       print_sdp(ac, 1);
    }

    FMVStream::~FMVStream()
    {
       av_write_trailer(this->format_ctx);
       avcodec_close(this->stream.st->codec);

       avio_close(io_ctx);

       avformat_free_context(this->format_ctx);

       av_frame_free(&this->stream.frame);
       av_free(this->format);
    }

    AVFrame* FMVStream::AllocPicture(PixelFormat pix_fmt, int width, int height)
    {
       // Allocate a frame
       AVFrame* frame = av_frame_alloc();

       if (frame == nullptr)
       {
           throw std::runtime_error("avcodec_alloc_frame failed");
       }

       if (av_image_alloc(frame->data, frame->linesize, width, height, pix_fmt, 1) < 0)
       {
           throw std::runtime_error("av_image_alloc failed");
       }

       frame->width = width;
       frame->height = height;
       frame->format = pix_fmt;

       return frame;
    }

    void FMVStream::print_sdp(AVFormatContext **avc, int n)
    {
       char sdp[2048];
       av_sdp_create(avc, n, sdp, sizeof(sdp));
       printf("SDP:\n%s\n", sdp);
       fflush(stdout);
    }

    AVStream* FMVStream::initVideoStream(AVFormatContext *oc)
    {
       AVStream* st = avformat_new_stream(oc, NULL);

       if (st == nullptr)
       {
           std::runtime_error("Could not alloc stream");
       }

       AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);

       if (codec == nullptr)
       {
           throw std::runtime_error("couldn't find mpeg2 encoder");
       }

       st->codec = avcodec_alloc_context3(codec);

       st->codec->codec_id = AV_CODEC_ID_H264;
       st->codec->codec_type = AVMEDIA_TYPE_VIDEO;
       st->codec->bit_rate = 400000;

       st->codec->width = this->frameWidth;
       st->codec->height = this->frameHeight;

       st->time_base.num = 1;
       st->time_base.den = 30;

       st->codec->framerate.num = 1;
       st->codec->framerate.den = 30;

       st->codec->max_b_frames = 2;
       st->codec->gop_size = 12;
       st->codec->pix_fmt = PIX_FMT_YUV420P;

       st->id = oc->nb_streams - 1;

       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
       {
           st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       // option setup for the codec
       av_opt_set(st->codec->priv_data, "profile", "baseline", AV_OPT_SEARCH_CHILDREN);

       if (avcodec_open2(st->codec, codec, NULL) < 0)
       {
           throw std::runtime_error("avcodec_open failed");
       }

       return st;
    }

    void FMVStream::initFrame()
    {
       // Allocate a tmp frame for converting our raw RGB data to YUV for encoding
       this->stream.tmpFrame = this->AllocPicture(PIX_FMT_RGB24, this->frameWidth, this->frameHeight);

       // Allocate a main frame
       this->stream.frame = this->AllocPicture(PIX_FMT_YUV420P, this->frameWidth, this->frameHeight);
    }

    This block is attempting to convert from the raw RGB to our needed YUV format for h.264 encoding.

    void FMVStream::CopyFrameData(uint8_t* data)
    {
       // fill image with our raw RGB data
       //avpicture_alloc((AVPicture*)this->stream.tmpFrame, PIX_FMT_RGB24, this->stream.st->codec->width, this->stream.st->codec->height);

       int numBytes = avpicture_get_size(PIX_FMT_RGB24, this->stream.st->codec->width, this->stream.st->codec->height);

       uint8_t* buffer = (uint8_t*) av_malloc(numBytes * sizeof(uint8_t));

       avpicture_fill((AVPicture*)this->stream.tmpFrame, buffer, PIX_FMT_RGB24, this->stream.st->codec->width, this->stream.st->codec->height);

       for (int y = 0; y < this->stream.st->codec->height; y++)
       {
           for (int x = 0; x < this->stream.st->codec->width; x++)
           {
               int offset = 3 * (x + y * this->stream.st->codec->width);
               this->stream.tmpFrame->data[0][offset + 0] = data[x + y * this->stream.st->codec->width]; // R
               this->stream.tmpFrame->data[0][offset + 1] = data[x + y * this->stream.st->codec->width + 1]; // G
               this->stream.tmpFrame->data[0][offset + 2] = data[x + y * this->stream.st->codec->width + 2]; // B
           }
       }

       // convert the RGB frame to a YUV frame using the sws Context
       this->stream.sws_ctx = sws_getContext(this->stream.st->codec->width, this->stream.st->codec->height, PIX_FMT_RGB32, this->stream.st->codec->width, this->stream.st->codec->height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);

       // use the scale function to transcode this raw frame to the correct type
       sws_scale(this->stream.sws_ctx, this->stream.tmpFrame->data, this->stream.tmpFrame->linesize, 0, this->stream.st->codec->height, this->stream.frame->data, this->stream.frame->linesize);
    }

    This is the block that encodes the raw data to h.264 and then sends it out over the MPEG-2 TS. I believe the problem lies within this block. I can put a breakpoint in my write-frame function and see that frames are being written; however, opening the resulting file in VLC results in a blank video. The file is approx 2 MB.

    int FMVStream::EncodeFrame(uint8_t* data)
    {
       AVCodecContext* c = this->stream.st->codec;

       AVRational one;
       one.den = one.num = 1;

       // check to see if we want to keep writing frames we can probably change this to a toggle switch
       if (av_compare_ts(this->stream.next_pts, this->stream.st->codec->time_base, 10, one) >= 0)
       {
           this->stream.frame = nullptr;
       }
       else
       {
           // Convert and load the frame data into the AVFrame struct
           CopyFrameData(data);
       }

       // setup the timestamp stepping
       AVPacket pkt = { 0 };
       av_init_packet(&pkt);
       this->stream.frame->pts = (int64_t)((1.0 / this->stream.st->codec->framerate.den) * 90000.0 * this->stream.next_pts++);

       int gotPacket, out_size, ret;

       out_size = avcodec_encode_video2(c, &pkt, this->stream.frame, &gotPacket);


       if (gotPacket == 1)
       {
           ret = write_frame(this->format_ctx, &c->time_base, this->stream.st, &pkt);
       }
       else
       {
           ret = 0;
       }

       if (ret < 0)
       {
           std::cerr << "Error writing video frame" << std::endl;
       }

       av_free_packet(&pkt);

       return ((this->stream.frame != nullptr) || gotPacket) ? 0 : 1;
    }

    int FMVStream::write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
    {
       /* rescale output packet timestamp values from codec to stream timebase */
       av_packet_rescale_ts(pkt, *time_base, st->time_base);
       pkt->stream_index = st->index;

       return av_interleaved_write_frame(fmt_ctx, pkt);
    }

    void FMVStream::setFrameWidth(const int width)
    {
       this->frameWidth = width;
    }

    int FMVStream::getFrameWidth() const
    {
       return this->frameWidth;
    }

    void FMVStream::setFrameHeight(const int height)
    {
       this->frameHeight = height;
    }

    int FMVStream::getFrameHeight() const
    {
       return this->frameHeight;
    }

    void FMVStream::setStreamAddress(const std::string&amp; address)
    {
       this->streamFilename = address;
    }

    std::string FMVStream::getStreamAddress() const
    {
       return this->streamFilename;
    }

    Here is the main function.

    #include "FullMotionVideoStream.h"

    #include <iostream>
    #include <thread>
    #include <chrono>
    #include <cstring> // std::memset

    int main(int argc, char** argv)
    {
       FMVStream* fmv = new FMVStream;

       fmv->setFrameWidth(640);
       fmv->setFrameHeight(480);

       std::cout &lt;&lt; "Streaming Address: " &lt;&lt; fmv->getStreamAddress() &lt;&lt; std::endl;

       // create our alternating frames of black and white to test the streaming functionality
       // (static storage: two raw 640x480 RGB frames are ~900 KB each, too large for the stack)
       static uint8_t white[640 * 480 * 3];
       static uint8_t black[640 * 480 * 3];

       std::memset(white, 255, sizeof(white));
       std::memset(black, 0, sizeof(black));

       for (auto i = 0; i &lt; 100; i++)
       {
           auto ret = fmv->EncodeFrame(white);

           if (ret != 0)
           {
               std::cerr &lt;&lt; "There was a problem encoding the frame: " &lt;&lt; i &lt;&lt; std::endl;
           }

           std::this_thread::sleep_for(std::chrono::milliseconds(10));
       }

       for (auto i = 0; i &lt; 100; i++)
       {
           auto ret = fmv->EncodeFrame(black);

           if (ret != 0)
           {
               std::cerr &lt;&lt; "There was a problem encoding the frame: " &lt;&lt; i &lt;&lt; std::endl;
           }

           std::this_thread::sleep_for(std::chrono::milliseconds(10));
       }

       delete fmv;
    }

    Here is the resultant output via the console / my print SDP function.

    [libx264 @ 000000ac95f58440] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
    AVX FMA3 AVX2 LZCNT BMI2
    [libx264 @ 000000ac95f58440] profile Constrained Baseline, level 3.0
    Output #0, mpegts, to '(null)':
       Stream #0:0: Video: h264 (libx264), yuv420p, 640x480, q=-1--1, 400 kb/s, 30
    tbn
    SDP:
    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    t=0 0
    a=tool:libavformat 56.23.104
    m=video 0 RTP/AVP 96
    b=AS:400
    a=rtpmap:96 H264/90000
    a=fmtp:96 packetization-mode=1
    a=control:streamid=0

    Streaming Address: test.mpeg
    [libx264 @ 000000ac95f58440] frame I:45    Avg QP: 0.51  size:  1315
    [libx264 @ 000000ac95f58440] frame P:136   Avg QP: 0.29  size:   182
    [libx264 @ 000000ac95f58440] mb I  I16..4: 99.7%  0.0%  0.3%
    [libx264 @ 000000ac95f58440] mb P  I16..4:  0.1%  0.0%  0.1%  P16..4:  0.1%  0.0
    %  0.0%  0.0%  0.0%    skip:99.7%
    [libx264 @ 000000ac95f58440] final ratefactor: -68.99
    [libx264 @ 000000ac95f58440] coded y,uvDC,uvAC intra: 0.5% 0.5% 0.5% inter: 0.0%
    0.1% 0.1%
    [libx264 @ 000000ac95f58440] i16 v,h,dc,p: 96%  0%  3%  0%
    [libx264 @ 000000ac95f58440] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu:  1% 10% 85%  0%  3%
    0%  1%  0%  0%
    [libx264 @ 000000ac95f58440] i8c dc,h,v,p: 100%  0%  0%  0%
    [libx264 @ 000000ac95f58440] ref P L0: 46.8% 25.2% 28.0%
    [libx264 @ 000000ac95f58440] kb/s:0.03

    I know there are probably many issues with this program; I am very new to FFmpeg and multimedia programming in general, and I've pieced together code found through Google and Stack Overflow to get to this point. The file has a reasonable size, but the reported length of 0.04 s tells me that my timestamping between frames/packets must be broken, and I am unsure how to fix it.

    I tried inspecting the file with ffmpeg.exe, using `ffmpeg -i` to remux it to a regular TS. It seems my code works better than I originally thought; however, I am simply trying to output a run of all-white frames.

    ffmpeg -i test.mpeg test.ts
    ffmpeg version N-70125-g6c9537b Copyright (c) 2000-2015 the FFmpeg developers
     built with gcc 4.9.2 (GCC)
     configuration: --disable-static --enable-shared --enable-gpl --enable-version3
    --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --ena
    ble-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --e
    nable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-lib
    gsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencor
    e-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enabl
    e-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-l
    ibtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-
    libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-l
    ibwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --ena
    ble-lzma --enable-decklink --enable-zlib
     libavutil      54. 19.100 / 54. 19.100
     libavcodec     56. 26.100 / 56. 26.100
     libavformat    56. 23.104 / 56. 23.104
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 11.101 /  5. 11.101
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Input #0, mpegts, from 'test.mpeg':
     Duration: 00:00:00.04, start: 0.000000, bitrate: 24026 kb/s
     Program 1
       Metadata:
         service_name    : Service01
         service_provider: FFmpeg
       Stream #0:0[0x100]: Video: h264 (Constrained Baseline) ([27][0][0][0] / 0x00
    1B), yuv420p, 640x480, 25 fps, 25 tbr, 90k tbn, 50 tbc
    File 'test.ts' already exists. Overwrite ? [y/N] y
    Output #0, mpegts, to 'test.ts':
     Metadata:
       encoder         : Lavf56.23.104
       Stream #0:0: Video: mpeg2video, yuv420p, 640x480, q=2-31, 200 kb/s, 25 fps,
    90k tbn, 25 tbc
       Metadata:
         encoder         : Lavc56.26.100 mpeg2video
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> mpeg2video (native))
    Press [q] to stop, [?] for help
    frame=    3 fps=0.0 q=2.0 Lsize=       9kB time=00:00:00.08 bitrate= 883.6kbits/
    s dup=0 drop=178
    video:7kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing ove
    rhead: 22.450111%