Advanced search

Media (91)

Other articles (66)

  • General document management

    13 May 2011, by

    MediaSPIP never modifies the original document that is put online.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original available for download in case it cannot be read in a web browser; and retrieving the original document’s metadata to describe the file textually.
    The tables below explain what MediaSPIP can do (...)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • The farm’s regular Cron tasks

    1 December 2010, by

    Managing the farm requires running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the shared hosting farm on a regular basis. Coupled with a system Cron on the central site of the farm, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)

On other sites (3960)

  • ffmpeg error "Could not allocate picture: Invalid argument Found Video Stream Found Audio Stream"

    26 October 2020, by Dinkan

    I am trying to write a C program that streams audio and video over the network via RTP, copying both AV codecs into an rtp_mpegts container, i.e. the equivalent of:

    ffmpeg -re -i Sample_AV_15min.ts -acodec copy -vcodec copy -f rtp_mpegts rtp://192.168.1.1:5004

    I am using muxing.c (which uses the ffmpeg libraries) as my example. The ffmpeg command-line application itself works fine.
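
    For reference, here is a rough sketch of what that ffmpeg command does internally: open the input, create a separate output context, copy each stream's codec parameters, and forward packets while rescaling timestamps. This is adapted from FFmpeg's remuxing.c example (using the newer codecpar API; the function name is illustrative) and is meant as a sketch, not a drop-in fix:

#include <libavformat/avformat.h>

static int remux_to_rtp(const char *in_file, const char *out_url)
{
    AVFormatContext *ic = NULL, *oc = NULL;
    AVPacket pkt;
    int ret, i;

    if ((ret = avformat_open_input(&ic, in_file, NULL, NULL)) < 0)
        return ret;
    if ((ret = avformat_find_stream_info(ic, NULL)) < 0)
        goto end;

    /* a separate context for the output; do not reuse the input one */
    if ((ret = avformat_alloc_output_context2(&oc, NULL, "rtp_mpegts", out_url)) < 0)
        goto end;

    for (i = 0; i < (int)ic->nb_streams; i++) {
        AVStream *ost = avformat_new_stream(oc, NULL);
        if (!ost) { ret = AVERROR(ENOMEM); goto end; }
        if ((ret = avcodec_parameters_copy(ost->codecpar, ic->streams[i]->codecpar)) < 0)
            goto end;
        ost->codecpar->codec_tag = 0; /* let the muxer pick a valid tag */
    }

    if (!(oc->oformat->flags & AVFMT_NOFILE) &&
        (ret = avio_open(&oc->pb, out_url, AVIO_FLAG_WRITE)) < 0)
        goto end;
    if ((ret = avformat_write_header(oc, NULL)) < 0)
        goto end;

    while (av_read_frame(ic, &pkt) >= 0) {
        /* stream copy: rescale timestamps from input to output time base */
        av_packet_rescale_ts(&pkt,
                             ic->streams[pkt.stream_index]->time_base,
                             oc->streams[pkt.stream_index]->time_base);
        pkt.pos = -1;
        ret = av_interleaved_write_frame(oc, &pkt);
        av_packet_unref(&pkt);
        if (ret < 0)
            break;
    }
    av_write_trailer(oc);

end:
    avformat_close_input(&ic);
    if (oc && !(oc->oformat->flags & AVFMT_NOFILE))
        avio_closep(&oc->pb);
    avformat_free_context(oc);
    return ret < 0 ? ret : 0;
}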

    Stream details

    Input #0, mpegts, from 'Weather_Nation_10min.ts':
  Duration: 00:10:00.38, start: 41313.400811, bitrate: 2840 kb/s
  Program 1
    Stream #0:0[0x11]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 1440x1080 [SAR 4:3 DAR 16:9], 29.97 fps, 59.94 tbr, 90k tbn, 59.94 tbc
    Stream #0:1[0x14]: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, fltp, 448 kb/s
Output #0, rtp_mpegts, to 'rtp://192.168.1.1:5004':
  Metadata:
    encoder         : Lavf54.63.104
    Stream #0:0: Video: h264 ([27][0][0][0] / 0x001B), yuv420p, 1440x1080 [SAR 4:3 DAR 16:9], q=2-31, 29.97 fps, 90k tbn, 29.97 tbc
    Stream #0:1: Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, 448 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)

    However, my application fails with:

    ./my_test_app Sample_AV_15min.ts rtp://192.168.1.1:5004  
[h264 @ 0x800b30] non-existing PPS referenced                                  
[h264 @ 0x800b30] non-existing PPS 0 referenced                        
[h264 @ 0x800b30] decode_slice_header error                            
[h264 @ 0x800b30] no frame! 

[....snipped...]
[h264 @ 0x800b30] non-existing PPS 0 referenced        
[h264 @ 0x800b30] non-existing PPS referenced  
[h264 @ 0x800b30] non-existing PPS 0 referenced  
[h264 @ 0x800b30] decode_slice_header error  
[h264 @ 0x800b30] no frame!  
[h264 @ 0x800b30] mmco: unref short failure  
[h264 @ 0x800b30] mmco: unref short failure

[mpegts @ 0x800020] max_analyze_duration 5000000 reached at 5024000 microseconds  
[mpegts @ 0x800020] PES packet size mismatch
could not find codec tag for codec id 17075200, default to 0.
could not find codec tag for codec id 86019, default to 0.
Could not allocate picture: Invalid argument  
Found Video Stream Found Audio Stream

    How do I fix this? My complete source code, based on muxing.c, is below:

/**
 * @file
 * libavformat API example.
 *
 * Output a media file in any supported libavformat format.
 * The default codecs are used.
 * @example doc/examples/muxing.c
 */

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>

#include <libavutil/mathematics.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>

/* 5 seconds stream duration */
#define STREAM_DURATION   200.0
#define STREAM_FRAME_RATE 25 /* 25 images/s */
#define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
#define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */

static int sws_flags = SWS_BICUBIC;

/**************************************************************/
/* audio output */

static float t, tincr, tincr2;
static int16_t *samples;
static int audio_input_frame_size;
#if 0
/* Add an output stream. */
static AVStream *add_stream(AVFormatContext *oc, AVCodec **codec,
                            enum AVCodecID codec_id)
{
    AVCodecContext *c;
    AVStream *st;

    /* find the encoder */
    *codec = avcodec_find_encoder(codec_id);
    if (!(*codec)) {
        fprintf(stderr, "Could not find encoder for '%s'\n",
                avcodec_get_name(codec_id));
        exit(1);
    }

    st = avformat_new_stream(oc, *codec);
    if (!st) {
        fprintf(stderr, "Could not allocate stream\n");
        exit(1);
    }
    st->id = oc->nb_streams-1;
    c = st->codec;

    switch ((*codec)->type) {
    case AVMEDIA_TYPE_AUDIO:
        st->id = 1;
        c->sample_fmt  = AV_SAMPLE_FMT_S16;
        c->bit_rate    = 64000;
        c->sample_rate = 44100;
        c->channels    = 2;
        break;

    case AVMEDIA_TYPE_VIDEO:
        c->codec_id = codec_id;

        c->bit_rate = 400000;
        /* Resolution must be a multiple of two. */
        c->width    = 352;
        c->height   = 288;
        /* timebase: This is the fundamental unit of time (in seconds) in terms
         * of which frame timestamps are represented. For fixed-fps content,
         * timebase should be 1/framerate and timestamp increments should be
         * identical to 1. */
        c->time_base.den = STREAM_FRAME_RATE;
        c->time_base.num = 1;
        c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
        c->pix_fmt       = STREAM_PIX_FMT;
        if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
            /* just for testing, we also add B frames */
            c->max_b_frames = 2;
        }
        if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
            /* Needed to avoid using macroblocks in which some coeffs overflow.
             * This does not happen with normal video, it just happens here as
             * the motion of the chroma plane does not match the luma plane. */
            c->mb_decision = 2;
        }
    break;

    default:
        break;
    }

    /* Some formats want stream headers to be separate. */
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= CODEC_FLAG_GLOBAL_HEADER;

    return st;
}
#endif
/**************************************************************/
/* audio output */

static float t, tincr, tincr2;
static int16_t *samples;
static int audio_input_frame_size;

static void open_audio(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
    AVCodecContext *c;
    int ret;

    c = st->codec;

    /* open it */
    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0) {
        fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
        exit(1);
    }

    /* init signal generator */
    t     = 0;
    tincr = 2 * M_PI * 110.0 / c->sample_rate;
    /* increment frequency by 110 Hz per second */
    tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;

    if (c->codec->capabilities & CODEC_CAP_VARIABLE_FRAME_SIZE)
        audio_input_frame_size = 10000;
    else
        audio_input_frame_size = c->frame_size;
    samples = av_malloc(audio_input_frame_size *
                        av_get_bytes_per_sample(c->sample_fmt) *
                        c->channels);
    if (!samples) {
        fprintf(stderr, "Could not allocate audio samples buffer\n");
        exit(1);
    }
}

/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
 * 'nb_channels' channels. */
static void get_audio_frame(int16_t *samples, int frame_size, int nb_channels)
{
    int j, i, v;
    int16_t *q;

    q = samples;
    for (j = 0; j < frame_size; j++) {
        v = (int)(sin(t) * 10000);
        for (i = 0; i < nb_channels; i++)
            *q++ = v;
        t     += tincr;
        tincr += tincr2;
    }
}

static void write_audio_frame(AVFormatContext *oc, AVStream *st)
{
    AVCodecContext *c;
    AVPacket pkt = { 0 }; // data and size must be 0;
    AVFrame *frame = avcodec_alloc_frame();
    int got_packet, ret;

    av_init_packet(&pkt);
    c = st->codec;

    get_audio_frame(samples, audio_input_frame_size, c->channels);
    frame->nb_samples = audio_input_frame_size;
    avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
                             (uint8_t *)samples,
                             audio_input_frame_size *
                             av_get_bytes_per_sample(c->sample_fmt) *
                             c->channels, 1);

    ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
    if (ret < 0) {
        fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
        exit(1);
    }

    if (!got_packet)
        return;

    pkt.stream_index = st->index;

    /* Write the compressed frame to the media file. */
    ret = av_interleaved_write_frame(oc, &pkt);
    if (ret != 0) {
        fprintf(stderr, "Error while writing audio frame: %s\n",
                av_err2str(ret));
        exit(1);
    }
    avcodec_free_frame(&frame);
}

static void close_audio(AVFormatContext *oc, AVStream *st)
{
    avcodec_close(st->codec);

    av_free(samples);
}

/**************************************************************/
/* video output */

static AVFrame *frame;
static AVPicture src_picture, dst_picture;
static int frame_count;

static void open_video(AVFormatContext *oc, AVCodec *codec, AVStream *st)
{
    int ret;
    AVCodecContext *c = st->codec;

    /* open the codec */
    ret = avcodec_open2(c, codec, NULL);
    if (ret < 0) {
        fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
        exit(1);
    }

    /* allocate and init a re-usable frame */
    frame = avcodec_alloc_frame();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }

    /* Allocate the encoded raw picture. */
    ret = avpicture_alloc(&dst_picture, c->pix_fmt, c->width, c->height);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate picture: %s\n", av_err2str(ret));
        exit(1);
    }

    /* If the output format is not YUV420P, then a temporary YUV420P
     * picture is needed too. It is then converted to the required
     * output format. */
    if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
        ret = avpicture_alloc(&src_picture, AV_PIX_FMT_YUV420P, c->width, c->height);
        if (ret < 0) {
            fprintf(stderr, "Could not allocate temporary picture: %s\n",
                    av_err2str(ret));
            exit(1);
        }
    }

    /* copy data and linesize picture pointers to frame */
    *((AVPicture *)frame) = dst_picture;
}

/* Prepare a dummy image. */
static void fill_yuv_image(AVPicture *pict, int frame_index,
                           int width, int height)
{
    int x, y, i;

    i = frame_index;

    /* Y */
    for (y = 0; y < height; y++)
        for (x = 0; x < width; x++)
            pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;

    /* Cb and Cr */
    for (y = 0; y < height / 2; y++) {
        for (x = 0; x < width / 2; x++) {
            pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
            pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
        }
    }
}

static void write_video_frame(AVFormatContext *oc, AVStream *st)
{
    int ret;
    static struct SwsContext *sws_ctx;
    AVCodecContext *c = st->codec;

    if (frame_count >= STREAM_NB_FRAMES) {
        /* No more frames to compress. The codec has a latency of a few
         * frames if using B-frames, so we get the last frames by
         * passing the same picture again. */
    } else {
        if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
            /* as we only generate a YUV420P picture, we must convert it
             * to the codec pixel format if needed */
            if (!sws_ctx) {
                sws_ctx = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
                                         c->width, c->height, c->pix_fmt,
                                         sws_flags, NULL, NULL, NULL);
                if (!sws_ctx) {
                    fprintf(stderr,
                            "Could not initialize the conversion context\n");
                    exit(1);
                }
            }
            fill_yuv_image(&src_picture, frame_count, c->width, c->height);
            sws_scale(sws_ctx,
                      (const uint8_t * const *)src_picture.data, src_picture.linesize,
                      0, c->height, dst_picture.data, dst_picture.linesize);
        } else {
            fill_yuv_image(&dst_picture, frame_count, c->width, c->height);
        }
    }

    if (oc->oformat->flags & AVFMT_RAWPICTURE) {
        /* Raw video case - directly store the picture in the packet */
        AVPacket pkt;
        av_init_packet(&pkt);

        pkt.flags        |= AV_PKT_FLAG_KEY;
        pkt.stream_index  = st->index;
        pkt.data          = dst_picture.data[0];
        pkt.size          = sizeof(AVPicture);

        ret = av_interleaved_write_frame(oc, &pkt);
    } else {
        /* encode the image */
        AVPacket pkt;
        int got_output;

        av_init_packet(&pkt);
        pkt.data = NULL;    // packet data will be allocated by the encoder
        pkt.size = 0;

        ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
            exit(1);
        }

        /* If size is zero, it means the image was buffered. */
        if (got_output) {
            if (c->coded_frame->key_frame)
                pkt.flags |= AV_PKT_FLAG_KEY;

            pkt.stream_index = st->index;

            /* Write the compressed frame to the media file. */
            ret = av_interleaved_write_frame(oc, &pkt);
        } else {
            ret = 0;
        }
    }
    if (ret != 0) {
        fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
        exit(1);
    }
    frame_count++;
}

static void close_video(AVFormatContext *oc, AVStream *st)
{
    avcodec_close(st->codec);
    av_free(src_picture.data[0]);
    av_free(dst_picture.data[0]);
    av_free(frame);
}

/**************************************************************/
/* media file output */

int main(int argc, char **argv)
{
    const char *filename;
    AVOutputFormat *fmt;
    AVFormatContext *oc;
    AVStream *audio_st, *video_st;
    AVCodec *audio_codec, *video_codec;
    double audio_pts, video_pts;
    int ret;
    char errbuf[50];
    int i = 0;
    /* Initialize libavcodec, and register all codecs and formats. */
    av_register_all();

    if (argc != 3) {
        printf("usage: %s input_file out_file|stream\n"
               "API example program to output a media file with libavformat.\n"
               "This program generates a synthetic audio and video stream, encodes and\n"
               "muxes them into a file named output_file.\n"
               "The output format is automatically guessed according to the file extension.\n"
               "Raw images can also be output by using '%%d' in the filename.\n"
               "\n", argv[0]);
        return 1;
    }

    filename = argv[2];

    /* allocate the output media context */
    avformat_alloc_output_context2(&oc, NULL, "rtp_mpegts", filename);
    if (!oc) {
        printf("Could not deduce output format from file extension: using MPEG.\n");
        avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
    }
    if (!oc) {
        return 1;
    }
    fmt = oc->oformat;
    //Find input stream info.

   video_st = NULL;
   audio_st = NULL;

   avformat_open_input( &oc, argv[1], 0, 0);

   if ((ret = avformat_find_stream_info(oc, 0)) < 0)
   {
       av_strerror(ret, errbuf, sizeof(errbuf));
       printf("Not Able to find stream info::%s ", errbuf);
       ret = -1;
       return ret;
   }
   for (i = 0; i < oc->nb_streams; i++)
   {
       if (oc->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
       {
           AVCodecContext *codec_ctx;
           unsigned int tag = 0;

           printf("Found Video Stream ");
           video_st = oc->streams[i];
           codec_ctx = video_st->codec;
           // m_num_frames = oc->streams[i]->nb_frames;
           video_codec = avcodec_find_decoder(codec_ctx->codec_id);
           ret = avcodec_open2(codec_ctx, video_codec, NULL);
            if (ret < 0)
            {
                av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
                return ret;
            }
            if (av_codec_get_tag2(oc->oformat->codec_tag, video_codec->id, &tag) == 0)
            {
                av_log(NULL, AV_LOG_ERROR, "could not find codec tag for codec id %d, default to 0.\n", audio_codec->id);
            }
            video_st->codec = avcodec_alloc_context3(video_codec);
            video_st->codec->codec_tag = tag;
       }

       if (oc->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
       {
           AVCodecContext *codec_ctx;
           unsigned int tag = 0;

           printf("Found Audio Stream ");
           audio_st = oc->streams[i];
          // aud_dts = audio_st->cur_dts;
          // aud_pts = audio_st->last_IP_pts;
          codec_ctx = audio_st->codec;
          audio_codec = avcodec_find_decoder(codec_ctx->codec_id);
          ret = avcodec_open2(codec_ctx, audio_codec, NULL);
          if (ret < 0)
          {
             av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
             return ret;
          }
          if (av_codec_get_tag2(oc->oformat->codec_tag, audio_codec->id, &tag) == 0)
          {
              av_log(NULL, AV_LOG_ERROR, "could not find codec tag for codec id %d, default to 0.\n",
                     audio_codec->id);
          }
          audio_st->codec = avcodec_alloc_context3(audio_codec);
          audio_st->codec->codec_tag = tag;
       }
   }
    /* Add the audio and video streams using the default format codecs
     * and initialize the codecs. */
    /*
    if (fmt->video_codec != AV_CODEC_ID_NONE) {
        video_st = add_stream(oc, &video_codec, fmt->video_codec);
    }
    if (fmt->audio_codec != AV_CODEC_ID_NONE) {
        audio_st = add_stream(oc, &audio_codec, fmt->audio_codec);
    }
    */

    /* Now that all the parameters are set, we can open the audio and
     * video codecs and allocate the necessary encode buffers. */
    if (video_st)
        open_video(oc, video_codec, video_st);
    if (audio_st)
        open_audio(oc, audio_codec, audio_st);

    av_dump_format(oc, 0, filename, 1);

    /* open the output file, if needed */
    if (!(fmt->flags & AVFMT_NOFILE)) {
        ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
        if (ret < 0) {
            fprintf(stderr, "Could not open '%s': %s\n", filename,
                    av_err2str(ret));
            return 1;
        }
    }

    /* Write the stream header, if any. */
    ret = avformat_write_header(oc, NULL);
    if (ret < 0) {
        fprintf(stderr, "Error occurred when opening output file: %s\n",
                av_err2str(ret));
        return 1;
    }

    if (frame)
        frame->pts = 0;
    for (;;) {
        /* Compute current audio and video time. */
        if (audio_st)
            audio_pts = (double)audio_st->pts.val * audio_st->time_base.num / audio_st->time_base.den;
        else
            audio_pts = 0.0;

        if (video_st)
            video_pts = (double)video_st->pts.val * video_st->time_base.num /
                        video_st->time_base.den;
        else
            video_pts = 0.0;

        if ((!audio_st || audio_pts >= STREAM_DURATION) &&
            (!video_st || video_pts >= STREAM_DURATION))
            break;

        /* write interleaved audio and video frames */
        if (!video_st || (video_st && audio_st && audio_pts < video_pts)) {
            write_audio_frame(oc, audio_st);
        } else {
            write_video_frame(oc, video_st);
            frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
        }
    }

    /* Write the trailer, if any. The trailer must be written before you
     * close the CodecContexts open when you wrote the header; otherwise
     * av_write_trailer() may try to use memory that was freed on
     * av_codec_close(). */
    av_write_trailer(oc);

    /* Close each codec. */
    if (video_st)
        close_video(oc, video_st);
    if (audio_st)
        close_audio(oc, audio_st);

    if (!(fmt->flags & AVFMT_NOFILE))
        /* Close the output file. */
        avio_close(oc->pb);

    /* free the stream */
    avformat_free_context(oc);

    return 0;
}


  • FFMPEG RTSP Server using muxing doc example

    11 November 2018, by Harshil Makwana

    I am trying to develop an RTSP server using FFmpeg. For that, I slightly modified the muxing example located in the doc/examples/ folder of the FFmpeg repository.

    Here is the source code of my RTSP server example:

    #include <stdlib.h>
    #include <stdio.h>
    #include <string.h>
    #include <math.h>

    #include <libavutil/avassert.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/opt.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/timestamp.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #include <libswresample/swresample.h>

    #define STREAM_DURATION   10.0
    #define STREAM_FRAME_RATE 25 /* 25 images/s */
    #define STREAM_PIX_FMT    AV_PIX_FMT_YUV420P /* default pix_fmt */

    #define SCALE_FLAGS SWS_BICUBIC

    // a wrapper around a single output AVStream
    typedef struct OutputStream {
       AVStream *st;
       AVCodecContext *enc;

       /* pts of the next frame that will be generated */
       int64_t next_pts;
       int samples_count;

       AVFrame *frame;
       AVFrame *tmp_frame;

       float t, tincr, tincr2;

       struct SwsContext *sws_ctx;
       struct SwrContext *swr_ctx;
    } OutputStream;

    static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt)
    {
       AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;

       printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
              av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
              av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
              av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
              pkt->stream_index);
    }

    static int write_frame(AVFormatContext *fmt_ctx, const AVRational *time_base, AVStream *st, AVPacket *pkt)
    {
       /* rescale output packet timestamp values from codec to stream timebase */
       av_packet_rescale_ts(pkt, *time_base, st->time_base);
       pkt->stream_index = st->index;

       /* Write the compressed frame to the media file. */
       log_packet(fmt_ctx, pkt);
       return av_interleaved_write_frame(fmt_ctx, pkt);
    }

    /* Add an output stream. */
    static void add_stream(OutputStream *ost, AVFormatContext *oc,
                          AVCodec **codec,
                          enum AVCodecID codec_id)
    {
       AVCodecContext *c;
       int i;

       /* find the encoder */
       *codec = avcodec_find_encoder(codec_id);
       if (!(*codec)) {
           fprintf(stderr, "Could not find encoder for '%s'\n",
                   avcodec_get_name(codec_id));
           exit(1);
       }

       ost->st = avformat_new_stream(oc, NULL);
       if (!ost->st) {
           fprintf(stderr, "Could not allocate stream\n");
           exit(1);
       }
       ost->st->id = oc->nb_streams-1;
       c = avcodec_alloc_context3(*codec);
       if (!c) {
           fprintf(stderr, "Could not alloc an encoding context\n");
           exit(1);
       }
       ost->enc = c;

       switch ((*codec)->type) {
       case AVMEDIA_TYPE_AUDIO:
           c->sample_fmt  = (*codec)->sample_fmts ?
               (*codec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
           c->bit_rate    = 64000;
           c->sample_rate = 44100;
           if ((*codec)->supported_samplerates) {
               c->sample_rate = (*codec)->supported_samplerates[0];
               for (i = 0; (*codec)->supported_samplerates[i]; i++) {
                   if ((*codec)->supported_samplerates[i] == 44100)
                       c->sample_rate = 44100;
               }
           }
           c->channels        = av_get_channel_layout_nb_channels(c->channel_layout);
           c->channel_layout = AV_CH_LAYOUT_STEREO;
           if ((*codec)->channel_layouts) {
               c->channel_layout = (*codec)->channel_layouts[0];
               for (i = 0; (*codec)->channel_layouts[i]; i++) {
                   if ((*codec)->channel_layouts[i] == AV_CH_LAYOUT_STEREO)
                       c->channel_layout = AV_CH_LAYOUT_STEREO;
               }
           }
           c->channels        = av_get_channel_layout_nb_channels(c->channel_layout);
           ost->st->time_base = (AVRational){ 1, c->sample_rate };
           break;

       case AVMEDIA_TYPE_VIDEO:
           c->codec_id = codec_id;

           c->bit_rate = 400000;
           /* Resolution must be a multiple of two. */
           c->width    = 352;
           c->height   = 288;
           /* timebase: This is the fundamental unit of time (in seconds) in terms
            * of which frame timestamps are represented. For fixed-fps content,
            * timebase should be 1/framerate and timestamp increments should be
            * identical to 1. */
           ost->st->time_base = (AVRational){ 1, STREAM_FRAME_RATE };
           c->time_base       = ost->st->time_base;

           c->gop_size      = 12; /* emit one intra frame every twelve frames at most */
           c->pix_fmt       = STREAM_PIX_FMT;
           if (c->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
               /* just for testing, we also add B-frames */
               c->max_b_frames = 2;
           }
           if (c->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
               /* Needed to avoid using macroblocks in which some coeffs overflow.
                * This does not happen with normal video, it just happens here as
                * the motion of the chroma plane does not match the luma plane. */
               c->mb_decision = 2;
           }
      break;

       default:
           break;
       }

       /* Some formats want stream headers to be separate. */
       if (oc->oformat->flags & AVFMT_GLOBALHEADER)
           c->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    /**************************************************************/
    /* audio output */

    static AVFrame *alloc_audio_frame(enum AVSampleFormat sample_fmt,
                                     uint64_t channel_layout,
                                     int sample_rate, int nb_samples)
    {
       AVFrame *frame = av_frame_alloc();
       int ret;

       if (!frame) {
           fprintf(stderr, "Error allocating an audio frame\n");
           exit(1);
       }

       frame->format = sample_fmt;
       frame->channel_layout = channel_layout;
       frame->sample_rate = sample_rate;
       frame->nb_samples = nb_samples;

       if (nb_samples) {
           ret = av_frame_get_buffer(frame, 0);
           if (ret < 0) {
               fprintf(stderr, "Error allocating an audio buffer\n");
               exit(1);
           }
       }

       return frame;
    }

    static void open_audio(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
    {
       AVCodecContext *c;
       int nb_samples;
       int ret;
      AVDictionary *opt = NULL;

       c = ost->enc;

       /* open it */
       av_dict_copy(&opt, opt_arg, 0);
       ret = avcodec_open2(c, codec, &opt);
       av_dict_free(&opt);
       if (ret < 0) {
           fprintf(stderr, "Could not open audio codec: %s\n", av_err2str(ret));
           exit(1);
       }

       /* init signal generator */
       ost->t     = 0;
       ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;
       /* increment frequency by 110 Hz per second */
       ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;

       if (c->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE)
           nb_samples = 10000;
       else
           nb_samples = c->frame_size;

       ost->frame     = alloc_audio_frame(c->sample_fmt, c->channel_layout,
                                          c->sample_rate, nb_samples);
       ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
                                          c->sample_rate, nb_samples);

       /* copy the stream parameters to the muxer */
       ret = avcodec_parameters_from_context(ost->st->codecpar, c);
       if (ret < 0) {
           fprintf(stderr, "Could not copy the stream parameters\n");
           exit(1);
       }

       /* create resampler context */
           ost->swr_ctx = swr_alloc();
           if (!ost->swr_ctx) {
               fprintf(stderr, "Could not allocate resampler context\n");
               exit(1);
           }

           /* set options */
           av_opt_set_int       (ost->swr_ctx, "in_channel_count",   c->channels,       0);
           av_opt_set_int       (ost->swr_ctx, "in_sample_rate",     c->sample_rate,    0);
           av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt",      AV_SAMPLE_FMT_S16, 0);
           av_opt_set_int       (ost->swr_ctx, "out_channel_count",  c->channels,       0);
           av_opt_set_int       (ost->swr_ctx, "out_sample_rate",    c->sample_rate,    0);
           av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt",     c->sample_fmt,     0);

           /* initialize the resampling context */
           if ((ret = swr_init(ost->swr_ctx)) < 0) {
               fprintf(stderr, "Failed to initialize the resampling context\n");
               exit(1);
           }
    }

    /* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
    * 'nb_channels' channels. */
    static AVFrame *get_audio_frame(OutputStream *ost)
    {
       AVFrame *frame = ost->tmp_frame;
       int j, i, v;
       int16_t *q = (int16_t*)frame->data[0];

       /* check if we want to generate more frames */
       if (av_compare_ts(ost->next_pts, ost->enc->time_base,
                         STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
           return NULL;

       for (j = 0; j < frame->nb_samples; j++) {
           v = (int)(sin(ost->t) * 10000);
           for (i = 0; i < ost->enc->channels; i++)
               *q++ = v;
           ost->t     += ost->tincr;
           ost->tincr += ost->tincr2;
       }

       frame->pts = ost->next_pts;
       ost->next_pts  += frame->nb_samples;

       return frame;
    }

    /*
    * encode one audio frame and send it to the muxer
    * return 1 when encoding is finished, 0 otherwise
    */
    static int write_audio_frame(AVFormatContext *oc, OutputStream *ost)
    {
       AVCodecContext *c;
       AVPacket pkt = { 0 }; // data and size must be 0;
       AVFrame *frame;
       int ret;
       int got_packet;
       int dst_nb_samples;

       av_init_packet(&pkt);
       c = ost->enc;

       frame = get_audio_frame(ost);

       if (frame) {
           /* convert samples from native format to destination codec format, using the resampler */
               /* compute destination number of samples */
               dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,
                                               c->sample_rate, c->sample_rate, AV_ROUND_UP);
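                /* (comment added for clarity) input and output sample rates are
                 * identical here, so the rescale is 1:1 and the assert below holds */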
               av_assert0(dst_nb_samples == frame->nb_samples);

           /* when we pass a frame to the encoder, it may keep a reference to it
            * internally;
           * make sure we do not overwrite it here
            */
           ret = av_frame_make_writable(ost->frame);
           if (ret < 0)
               exit(1);

           /* convert to destination format */
           ret = swr_convert(ost->swr_ctx,
                             ost->frame->data, dst_nb_samples,
                             (const uint8_t **)frame->data, frame->nb_samples);
           if (ret < 0) {
               fprintf(stderr, "Error while converting\n");
               exit(1);
           }
           frame = ost->frame;

           frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base);
           ost->samples_count += dst_nb_samples;
       }

       ret = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
       if (ret < 0) {
           fprintf(stderr, "Error encoding audio frame: %s\n", av_err2str(ret));
           exit(1);
       }

       if (got_packet) {
           ret = write_frame(oc, &c->time_base, ost->st, &pkt);
           if (ret < 0) {
               fprintf(stderr, "Error while writing audio frame: %s\n",
                       av_err2str(ret));
               exit(1);
           }
       }

       return (frame || got_packet) ? 0 : 1;
    }

    /**************************************************************/
    /* video output */

    static AVFrame *alloc_picture(enum AVPixelFormat pix_fmt, int width, int height)
    {
       AVFrame *picture;
       int ret;

       picture = av_frame_alloc();
       if (!picture)
           return NULL;

       picture->format = pix_fmt;
       picture->width  = width;
       picture->height = height;

       /* allocate the buffers for the frame data */
       ret = av_frame_get_buffer(picture, 32);
       if (ret < 0) {
           fprintf(stderr, "Could not allocate frame data.\n");
           exit(1);
       }

       return picture;
    }

    static void open_video(AVFormatContext *oc, AVCodec *codec, OutputStream *ost, AVDictionary *opt_arg)
    {
       int ret;
       AVCodecContext *c = ost->enc;
       AVDictionary *opt = NULL;

       av_dict_copy(&opt, opt_arg, 0);

       /* open the codec */
       ret = avcodec_open2(c, codec, &opt);
       av_dict_free(&opt);
       if (ret < 0) {
           fprintf(stderr, "Could not open video codec: %s\n", av_err2str(ret));
           exit(1);
       }

       /* allocate and init a re-usable frame */
       ost->frame = alloc_picture(c->pix_fmt, c->width, c->height);
       if (!ost->frame) {
           fprintf(stderr, "Could not allocate video frame\n");
           exit(1);
       }

       /* If the output format is not YUV420P, then a temporary YUV420P
        * picture is needed too. It is then converted to the required
        * output format. */
       ost->tmp_frame = NULL;
       if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
           ost->tmp_frame = alloc_picture(AV_PIX_FMT_YUV420P, c->width, c->height);
           if (!ost->tmp_frame) {
               fprintf(stderr, "Could not allocate temporary picture\n");
               exit(1);
           }
       }

       /* copy the stream parameters to the muxer */
       ret = avcodec_parameters_from_context(ost->st->codecpar, c);
       if (ret < 0) {
           fprintf(stderr, "Could not copy the stream parameters\n");
           exit(1);
       }
    }

    /* Prepare a dummy image. */
    static void fill_yuv_image(AVFrame *pict, int frame_index,
                              int width, int height)
    {
       int x, y, i;

       i = frame_index;

       /* Y */
       for (y = 0; y < height; y++)
           for (x = 0; x < width; x++)
               pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;

       /* Cb and Cr */
       for (y = 0; y < height / 2; y++) {
           for (x = 0; x < width / 2; x++) {
               pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
               pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
           }
       }
    }

    static AVFrame *get_video_frame(OutputStream *ost)
    {
       AVCodecContext *c = ost->enc;

       /* check if we want to generate more frames */
       if (av_compare_ts(ost->next_pts, c->time_base,
                         STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
           return NULL;

       /* when we pass a frame to the encoder, it may keep a reference to it
        * internally; make sure we do not overwrite it here */
       if (av_frame_make_writable(ost->frame) < 0)
           exit(1);

       if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
           /* as we only generate a YUV420P picture, we must convert it
            * to the codec pixel format if needed */
           if (!ost->sws_ctx) {
               ost->sws_ctx = sws_getContext(c->width, c->height,
                                             AV_PIX_FMT_YUV420P,
                                             c->width, c->height,
                                             c->pix_fmt,
                                             SCALE_FLAGS, NULL, NULL, NULL);
               if (!ost->sws_ctx) {
                   fprintf(stderr,
                           "Could not initialize the conversion context\n");
                   exit(1);
               }
           }
           fill_yuv_image(ost->tmp_frame, ost->next_pts, c->width, c->height);
           sws_scale(ost->sws_ctx,
                     (const uint8_t * const *)ost->tmp_frame->data, ost->tmp_frame->linesize,
                     0, c->height, ost->frame->data, ost->frame->linesize);
       } else {
           fill_yuv_image(ost->frame, ost->next_pts, c->width, c->height);
       }

       ost->frame->pts = ost->next_pts++;

       return ost->frame;
    }

    /*
    * encode one video frame and send it to the muxer
    * return 1 when encoding is finished, 0 otherwise
    */
    static int write_video_frame(AVFormatContext *oc, OutputStream *ost)
    {
       int ret;
       AVCodecContext *c;
       AVFrame *frame;
       int got_packet = 0;
       AVPacket pkt = { 0 };

       c = ost->enc;

       frame = get_video_frame(ost);

       av_init_packet(&pkt);

       /* encode the image */
       ret = avcodec_encode_video2(c, &pkt, frame, &got_packet);
       if (ret < 0) {
           fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
           exit(1);
       }

       if (got_packet) {
           ret = write_frame(oc, &c->time_base, ost->st, &pkt);
       } else {
           ret = 0;
       }

       if (ret < 0) {
           fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
           exit(1);
       }

       return (frame || got_packet) ? 0 : 1;
    }

    static void close_stream(AVFormatContext *oc, OutputStream *ost)
    {
       avcodec_free_context(&ost->enc);
       av_frame_free(&ost->frame);
       av_frame_free(&ost->tmp_frame);
       sws_freeContext(ost->sws_ctx);
       swr_free(&ost->swr_ctx);
    }

    /**************************************************************/
    /* media file output */

    int main(int argc, char **argv)
    {
       OutputStream video_st = { 0 }, audio_st = { 0 };
       const char *filename;
       AVOutputFormat *fmt;
       AVFormatContext *oc;
       AVCodec *audio_codec, *video_codec;
       int ret;
       int have_video = 0, have_audio = 0;
       int encode_video = 0, encode_audio = 0;
       AVDictionary *opt = NULL;
       int i;

       /* Initialize libavcodec, and register all codecs and formats. */
       av_register_all();
       avformat_network_init();
       if (argc < 2) {
           printf("usage: %s output_file\n"
                  "API example program to output a media file with libavformat.\n"
                  "This program generates a synthetic audio and video stream, encodes and\n"
                  "muxes them into a file named output_file.\n"
                  "The output format is automatically guessed according to the file extension.\n"
                  "Raw images can also be output by using '%%d' in the filename.\n"
                  "\n", argv[0]);
           return 1;
       }

       filename = argv[1];
       for (i = 2; i+1 < argc; i+=2) {
           if (!strcmp(argv[i], "-flags") || !strcmp(argv[i], "-fflags"))
               av_dict_set(&opt, argv[i]+1, argv[i+1], 0);
       }
      /* allocate the output media context */
       avformat_alloc_output_context2(&amp;oc, NULL, "rtsp", filename);
       if (!oc) {
           printf("Could not deduce output format from file extension: using MPEG.\n");
           avformat_alloc_output_context2(&amp;oc, NULL, "mpeg", filename);
       }
       if (!oc)
           return 1;

       fmt = oc->oformat;

       /* Add the audio and video streams using the default format codecs
        * and initialize the codecs. */
       if (fmt->video_codec != AV_CODEC_ID_NONE) {
           add_stream(&video_st, oc, &video_codec, fmt->video_codec);
           have_video = 1;
           encode_video = 1;
       }
       if (fmt->audio_codec != AV_CODEC_ID_NONE) {
           add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec);
           have_audio = 1;
           encode_audio = 1;
       }

       /* Now that all the parameters are set, we can open the audio and
        * video codecs and allocate the necessary encode buffers. */
       if (have_video)
           open_video(oc, video_codec, &video_st, opt);

       if (have_audio)
           open_audio(oc, audio_codec, &audio_st, opt);

       av_dump_format(oc, 0, filename, 1);

       /* open the output file, if needed */
       if (!(fmt->flags & AVFMT_NOFILE)) {
           ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               fprintf(stderr, "Could not open '%s': %s\n", filename,
                       av_err2str(ret));
               return 1;
           }
       }

       /* Write the stream header, if any. */
       ret = avformat_write_header(oc, &opt);
       if (ret < 0) {
           fprintf(stderr, "Error occurred when opening output file: %s\n",
                   av_err2str(ret));
           return 1;
       }

       while (encode_video || encode_audio) {
           /* select the stream to encode */
           if (encode_video &&
              (!encode_audio || av_compare_ts(video_st.next_pts, video_st.enc->time_base,
                                               audio_st.next_pts, audio_st.enc->time_base) <= 0)) {
               encode_video = !write_video_frame(oc, &amp;video_st);
           } else {
               encode_audio = !write_audio_frame(oc, &amp;audio_st);
           }
       }

       /* Write the trailer, if any. The trailer must be written before you
        * close the CodecContexts open when you wrote the header; otherwise
        * av_write_trailer() may try to use memory that was freed on
        * av_codec_close(). */
       av_write_trailer(oc);

       /* Close each codec. */
       if (have_video)
           close_stream(oc, &amp;video_st);
       if (have_audio)
           close_stream(oc, &amp;audio_st);

       if (!(fmt->flags & AVFMT_NOFILE))
           /* Close the output file. */
           avio_closep(&oc->pb);

       /* free the stream */
       avformat_free_context(oc);

       return 0;
    }

    After compiling it, I run the binary:

    $ ./muxing rtsp://127.0.0.1/test
    Output #0, rtsp, to 'rtsp://127.0.0.1/test':
       Stream #0:0: Video: mpeg4, yuv420p, 352x288, q=2-31, 400 kb/s, 25 tbn
       Stream #0:1: Audio: aac (LC), 44100 Hz, stereo, fltp, 64 kb/s
    [tcp @ 0x2b9d220] Connection to tcp://127.0.0.1:554?timeout=0 failed: Connection refused
    Error occurred when opening output file: Connection refused

    But I am getting a Connection refused error.

  • Marketing Touchpoints: Examples, KPIs, and Best Practices

    11 March 2024, by Erin

    The customer journey is rarely straightforward. Rather, each stage comprises numerous points of contact with your brand, known as marketing touchpoints. And each touchpoint is equally important to the customer experience. 

    This article will explore marketing touchpoints in detail, including how to analyse them with attribution models and which KPIs to track. It will also share tips on incorporating these touchpoints into your marketing strategy. 

    What are marketing touchpoints?

    Marketing touchpoints are the interactions that take place between brands and customers throughout the latter’s journey, either online or in person. 

    [Illustration: omni-channel digital marketing]

    By understanding how customers interact with your brand before, during and after a purchase, you can identify the channels that contribute to starting, driving and closing buyer journeys. Not only that, but you’ll also learn how to optimise the customer experience. This can also help you:

    • Promote customer loyalty through increased customer satisfaction
    • Improve your brand reputation and foster a more positive perception of your brand, supported by social proof 
    • Build brand awareness among prospective customers 
    • Reconnect with current customers to drive repeat business

    According to a 2023 survey, social media and video-sharing platforms are the leading digital touchpoints among US consumers.

    With the customer journey divided into three stages — awareness, consideration, and decision — we can group these interactions into three touchpoint segments, depending on whether they occur before, during or after a purchase. 

    Touchpoints before a purchase

    Touchpoints before a purchase are those initial interactions between potential customers and brands that occur during the awareness stage — before they’ve made a purchase decision. 

    Here are some key touchpoints at the pre-purchase stage:

    • Customer reviews, forums, and testimonials 
    • Social media posts
    • Online ads 
    • Company events and product demos
    • Other digital touchpoints, like video content, blog posts, or infographics
    • Peer referral 

    In PwC’s 2024 Global Consumer Insights Pulse Survey, 54% of consumers listed search engines as their primary source of pre-purchase information, followed by Amazon (35%) and retailer websites (33%). 

    Here are the survey’s findings in Western Europe, specifically:

    Social channels are another major pre-purchase touchpoint; 25% of social media users aged 18 to 44 have made a purchase through a social media app over the past three months.

    Touchpoints during a purchase

    Touchpoints during a purchase occur when the prospective customer has made their purchase decision. It’s the beginning of a (hopefully) lasting relationship with them. 

    It’s important to involve both marketing and sales teams here — and to keep track of conversion metrics.

    Here are the main touchpoints at this stage:

    • Company website pages 
    • Product pages and catalogues 
    • Communication between customers and sales reps 
    • Product packaging and labelling 
    • Point-of-sale (POS) — the final touchpoint the prospective customer will reach before making the final purchasing decision 

    Touchpoints after a purchase

    You can use touchpoints after a purchase to maintain a positive relationship and keep current customers engaged. Examples of touchpoints that contribute to a good post-purchase experience for the customer include the following : 

    • Thank-you emails 
    • Email newsletters 
    • Customer satisfaction surveys 
    • Cross-selling emails 
    • Renewal options 
    • Customer loyalty programs

    Email marketing remains significant across all touchpoint segments, with 44% of CMOs agreeing that it’s essential to their marketing strategy — and it also plays a particularly important role in the post-purchase experience. For 61.1% of marketing teams, email open rates are higher than 20%.

    Sixty-nine percent of consumers say they’ve stopped doing business with a brand following a bad experience, so the importance of customer service touchpoints shouldn’t be overlooked. Live chat, chatbots, self-service resources, and customer service teams are integral to the post-purchase experience.

    Attribution models: Assigning value to marketing touchpoints

    Determining the most effective touchpoints — those that directly contribute to conversions — is a process known as marketing attribution. The goal here is to identify the specific channels and points of contact with prospective customers that result in revenue for the company.

    [Illustration: the marketing funnel stages]

    You can use these insights to understand — and maximise — marketing return on investment (ROI). Otherwise, you risk allocating your budget to the wrong channels. 

    It’s possible to group attribution models into two categories — single-touch and multi-touch — depending on whether you assign value to one or more contributing touchpoints.

    Single-touch attribution models, where you’re giving credit for the conversion to a single touchpoint, include the following:

    • First-touch attribution: This assigns credit for the conversion to the first interaction a customer had with a brand; however, it fails to consider lower-funnel touchpoints.
    • Last-click attribution: This focuses only on bottom-of-funnel marketing and credits the last interaction the customer had with a brand before completing a purchase.
    • Last non-direct attribution: This assigns all of the credit to the touchpoint immediately preceding a direct touchpoint.

    Multi-touch attribution models are more complex and distribute the credit for conversion across multiple relevant touchpoints throughout the customer journey:

    • Linear attribution: The simplest multi-touch attribution model assigns equal value to all contributing touchpoints.
    • Position-based or U-shaped attribution: This assigns the greatest value to the first and last touchpoints — with 40% of the conversion credit each — and divides the remaining 20% across all the other touchpoints (see the worked example after this list).
    • Time-decay attribution: This model assigns the most credit to the customer’s most recent interactions with a brand, assuming that the touchpoints that occur later in the journey have a bigger impact on the conversion.
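
    To make the U-shaped split concrete: a conversion worth $100 that involved five touchpoints would credit $40 to the first touchpoint, $40 to the last, and roughly $6.67 to each of the three in between.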

    Consider the following when choosing the most appropriate attribution model for your business:

    • The length of your typical sales cycle
    • Your marketing goals: increasing awareness, lead generation, driving revenue, etc.
    • How many stages and touchpoints make up your sales funnel

    Sometimes, it even makes sense to measure marketing performance using more than one attribution model.

    With the sheer volume of data that’s constantly generated across numerous online touchpoints, from your website to social media channels, it’s practically impossible to collect and analyse it manually.

    You’ll need an advanced web analytics platform to identify key touchpoints and assign value to them.

    Matomo’s Marketing Attribution feature can accurately measure the performance of different touchpoints to ensure that you’re allocating resources to the right channels. This is done in a compliant manner, without the need for data sampling or cookie consent screens (except in Germany and the UK), ensuring both accuracy and privacy compliance.

    Customer journey KPIs for measuring marketing campaign performance 

    Measuring the impact of different touchpoints on marketing campaign performance can help you understand how customer interactions drive conversions — and how to optimise your future efforts. 

    [Illustration: customer journey concept]

    Clearly, this is not a one-time effort. You should continuously reevaluate the crucial touchpoints that drive the most engagement at different stages of the customer journey. 

    Web analytics platforms can provide valuable insights into ever-changing consumer behaviours and trends and help you make informed decisions. 

    At the moment, Google is the most popular solution in the web analytics industry, with a combined market share of more than 70%.

    However, if privacy, data accuracy and GDPR compliance are priorities for you, Matomo is an alternative worth considering.

    KPIs to track before a purchase 

    During the pre-purchase stage, focus on the KPIs that measure the effectiveness of marketing activities across various online touchpoints — landing pages, email campaigns, social channels and ad placement on SERPs, for instance. 

    KPIs to track during the consideration stage include the following:

    • Cost-per-click (CPC): The CPC, the total cost of paid online advertising divided by the number of clicks those ads get, indicates whether you’re getting a good ROI (see the worked example after this list). In the UK, the average CPC for search advertising is $1.22. Globally, it averages $0.62.
    • Engagement rate: The engagement rate, which is the total number of interactions divided by the number of followers, is useful for measuring the performance of social media touchpoints. Customer engagement also applies to other channels, like tracking average time on-page, form conversions, bounce rates, and other website interactions.
    • Click-through rate (CTR): The CTR — or the number of clicks your ads receive compared to the number of times they’re shown — helps you measure the performance of CTAs, email newsletters and pay-per-click (PPC) advertising.
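
    For instance, if you spend $610 on search ads and they generate 500 clicks, your CPC is $1.22; if those ads were displayed 20,000 times, your CTR is 2.5%.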

    KPIs to track during a purchase 

    As a potential customer moves further down the sales funnel and reaches the decision stage, where they’re ready to make the choice to purchase, you should be tracking the following:

    • Conversion rate: This is the percentage of leads that convert into customers by completing the desired action, relative to the total number of website visitors. It shows you whether you’re targeting the right people and providing a frictionless checkout experience (see the example after this list).
    • Sales revenue: This refers to the quantity of products sold multiplied by the product’s price. It helps you track the company’s ability to generate profit.
    • Cost per conversion: This KPI is the total cost of online advertising in relation to the number of conversions. It measures the effectiveness of different marketing channels and the cost of converting prospective customers into buyers. It also helps forecast future ad spend.
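
    As a quick example, 500 purchases from 20,000 website visitors is a 2.5% conversion rate; if those purchases came from $5,000 in ad spend, the cost per conversion is $10.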

    KPIs to track after a purchase

    At the post-purchase stage, your priority should be gathering feedback:

    Customer feedback surveys are great for collecting insights into customers’ post-purchase experience, opinions about your brand, products and services, and needs and expectations. 

    In addition to measuring customer satisfaction, these insights can help you identify points of friction, forecast future growth and revenue and spot customers at risk of churning. 

    Focus on the following customer satisfaction and retention metrics:

    • Customer Satisfaction Score (CSAT): This metric, which is gathered through customer satisfaction surveys, helps you gauge satisfaction levels. After all, 77% of consumers consider great customer service an important driver of brand loyalty.
    • Net Promoter Score (NPS): Based on single-question customer surveys, NPS indicates how likely a customer is to recommend your business (a sample calculation follows this list).
    • Customer Lifetime Value (CLV): The CLV is the profit you can expect to generate from one customer throughout their relationship with your company.
    • Customer Health Score (CHS): This score can assess how “healthy” the customer’s relationship with your brand is and identify at-risk customers.
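
    NPS is typically calculated as the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). For example, if 60% of respondents are promoters, 30% are passives and 10% are detractors, your NPS is 60 - 10 = 50.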

    Marketing touchpoints: Tips and best practices

    Customer experience is more important today than ever. 

    [Illustration: marketing funnel optimisation]

    Salesforce’s 2022 State of the Connected Consumer report indicated that, for 88% of customers, the experience the brand provides is just as important as the product itself. 

    Here’s how you can build your customer touchpoint strategy and use effective touchpoints to improve customer satisfaction, build a loyal customer base, deliver better digital experiences and drive growth:

    Understand the customer’s end-to-end experience 

    The typical customer’s journey follows a non-linear path of individual experiences that shape their awareness and brand preference. 

    Seventy-three percent of customers expect brands to understand their needs. So, personalising each interaction and delivering targeted content at different touchpoint segments — supported by customer segmentation and tools like Matomo — should be a priority. 

    Try to put yourself in the prospective customer’s shoes and understand their motivation and needs, focusing on their end-to-end experience rather than individual interactions. 

    Create a customer journey map 

    Once you understand how prospective customers interact with your brand, it becomes easier to map their journey from the pre-purchase stage to the actual purchase and beyond. 

    By creating these visual “roadmaps,” you make sure that you’re delivering the right content on the right channels at the right times and to the right audience — the key to successful marketing.

    Identify best-performing digital touchpoints 

    You can use insights from marketing attribution to pinpoint areas that are performing well. 

    By analysing the data provided by Matomo’s Marketing Attribution feature, you can determine which digital touchpoints are driving the most conversions or engagement, allowing you to focus your resources on optimising these channels for even greater success. 

    This targeted approach helps maximise the effectiveness of your marketing efforts and ensures a higher return on investment.

    Discover key marketing touchpoints with Matomo 

    The customer’s journey rarely follows a direct route. If you hope to reach more customers and improve their experience, you’ll need to identify and manage individual marketing touchpoints every step of the way.

    While this process looks different for every business, it’s important to remember that your customers’ experience begins long before they interact with your brand for the first time — and carries on long after they complete the purchase. 

    In order to find these touchpoints and measure their effectiveness across multiple marketing channels, you’ll have to rely on accurate data — and a powerful web analytics tool like Matomo can provide those valuable marketing insights. 

    Try Matomo free for 21 days. No credit card required.