Other articles (68)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    To get a working installation, you must manually install all of the software dependencies on the server.
    If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the two following images for a comparison.
    Simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying the form elements to improve, for example select[multiple] for multiple-selection lists (...)

On other sites (7578)

  • A Complete Guide to Metrics in Google Analytics

    11 January 2024, by Erin

    There’s no denying that Google Analytics is the most popular web analytics solution today. Many marketers choose it to understand user behaviour. But when it offers so many different types of metrics, it can be overwhelming to choose which ones to focus on. In this article, we’ll dive into how metrics work in Google Analytics 4 and how to decide which metrics may be most useful to you, depending on your analytics needs.

    However, there are alternative web analytics solutions that can provide more accurate data and supplement GA’s existing features. Keep reading to learn how to overcome Google Analytics limitations so you can get the most out of your web analytics.

    What is a metric in Google Analytics?

    In Google Analytics, a metric is a quantitative measurement or numerical data that provides insights into specific aspects of user behaviour. Metrics represent the counts or sums of user interactions, events or other data points. You can use GA metrics to better understand how people engage with a website or mobile app. 

    Unlike Universal Analytics (the previous version of GA), GA4 is event-centric, with an automated and simplified event-tracking process. GA4 is also more user-centric and lets you home in on individual user journeys. Some examples of common key metrics in GA4 are:

    • Sessions: A group of user interactions on your website that occur within a specific time period. A session concludes when there is no user activity for 30 minutes.
    • Total Users: The cumulative count of individuals who accessed your site within a specified date range.
    • Engagement Rate: The percentage of visits to your website or app that included engagement (e.g., one or more pageviews, one or more conversions, etc.), determined by dividing engaged sessions by total sessions (see the sketch below).
    Main overview dashboard in GA4 displaying metrics
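
    To make the engagement rate formula concrete, here is a minimal sketch in C++ that works through the calculation with made-up session counts; the numbers are purely illustrative, not real GA4 data.

#include <iostream>

int main()
{
    // Hypothetical counts as they might appear in a GA4 report (illustrative only).
    int engagedSessions = 450;
    int totalSessions = 1000;

    // Engagement rate = engaged sessions / total sessions, expressed as a percentage.
    double engagementRate = 100.0 * engagedSessions / totalSessions;
    std::cout << "Engagement rate: " << engagementRate << "%\n"; // prints 45%

    return 0;
}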

    Metrics are invaluable when it comes to website and conversion optimisation. Whether you’re on the marketing team, creating content or designing web pages, understanding how your users interact with your digital platforms is essential.

    GA4 metrics vs. dimensions

    In GA4, metrics are quantitative measurements, while dimensions are qualitative descriptors that provide additional context for those metrics. To make things crystal clear, here are some examples of how metrics and dimensions are used together:

    • “Session duration” = metric, “device type” = dimension 
      • In this situation, the dimension can segment the data by device type so you can optimise the user experience for different devices.
    • “Bounce rate” = metric, “traffic source/medium” = dimension 
      • Here, the dimension helps you segment by traffic source to understand how different acquisition channels are performing. 
    • “Conversion rate” = metric, “Landing page” = dimension 
      • When the conversion rate data is segmented by landing page, you can better see the most effective landing pages. 

    You can get into the nitty-gritty of granular analysis by combining metrics and dimensions to better understand specific user interactions.
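
    As a rough illustration of how a dimension segments a metric, the following C++ sketch groups hypothetical session records by a "device type" dimension and computes an engagement rate per segment; the records and field names are invented for the example.

#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// A deliberately simplified session record: one dimension plus the flag
// that feeds the engagement rate metric.
struct Session
{
    std::string deviceType; // dimension
    bool engaged;           // contributes to the metric
};

int main()
{
    std::vector<Session> sessions = {
        {"desktop", true}, {"desktop", false}, {"mobile", true},
        {"mobile", true},  {"mobile", false},  {"tablet", true},
    };

    // Aggregate the metric per dimension value: {engaged, total} per device type.
    std::map<std::string, std::pair<int, int>> byDevice;
    for (const auto &s : sessions)
    {
        auto &counts = byDevice[s.deviceType];
        if (s.engaged)
            counts.first++;
        counts.second++;
    }

    for (const auto &entry : byDevice)
    {
        double rate = 100.0 * entry.second.first / entry.second.second;
        std::cout << entry.first << ": " << rate << "% engagement rate\n";
    }

    return 0;
}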

    How do Google Analytics metrics work?

    Before diving into the most important metrics you should track, let’s review how metrics in GA4 work. 

    GA4 overview dashboard of engagement metrics
    1. Tracking code implementation

    The process begins with adding the Google Analytics 4 tracking code to the HTML of your web pages. This tracking code is JavaScript added to each page of the website; it collects data related to user interactions, events and other important tidbits.

    2. Data collection

    As users interact with the website or app, the Google Analytics 4 tracking code captures various data points (e.g., page views, clicks, form submissions, custom events, etc.). This raw data is compiled and sent to Google Analytics servers for processing (see the sketch after these steps).

    3. Data processing algorithms

    When the data reaches Google Analytics servers, data processing algorithms come into play. These algorithms analyse the incoming raw data to identify the dataset’s trends, relationships and patterns. This part of the process involves cleaning and organising the data.

    4. Segmentation and customisation

    As discussed in the previous section, Google Analytics 4 allows for segmentation and customisation of data with dimensions. To analyse specific data groups, you can define segments based on various dimensions (e.g., traffic source, device type). Custom events and user properties can also be defined to tailor the tracking to the unique needs of your website or app.

    5. Report generation

    Google Analytics 4 can make comprehensive reports and dashboards based on the processed and segmented data. These reports, often in the form of graphs and charts, help identify patterns and trends in the data.
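
    To give a feel for what the collection step (step 2) looks like on the wire, here is a minimal sketch that sends a single event to Google Analytics 4 from server-side C++ via the Measurement Protocol, using libcurl. The measurement_id, api_secret and client_id values are placeholders; in normal use, collection happens through the JavaScript tag rather than code like this.

#include <curl/curl.h>

#include <string>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    // Placeholder measurement_id and api_secret; substitute your own values.
    const std::string url =
        "https://www.google-analytics.com/mp/collect"
        "?measurement_id=G-XXXXXXXXXX&api_secret=YOUR_API_SECRET";

    // One "page_view" event for an arbitrary client_id (illustrative payload).
    const std::string body =
        "{\"client_id\":\"555.1234567890\","
        "\"events\":[{\"name\":\"page_view\","
        "\"params\":{\"page_location\":\"https://example.com/\"}}]}";

    struct curl_slist *headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

    // Fire the request; the Measurement Protocol replies with an empty 2xx on success.
    CURLcode res = curl_easy_perform(curl);

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}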

    What are the most important Google Analytics metrics to track?

    In this section, we’ll identify and define key metrics for marketing teams to track in Google Analytics 4. 

    1. Pageviews are the total number of times a specific page or screen on your website or app is viewed by visitors. Pageviews are calculated each time a web page is loaded or reloaded in a browser. You can use this metric to measure the popularity of certain content on your website and what users are interested in. 
    2. Event tracking monitors user interactions with content on a website or app (e.g., clicks, downloads, video views, etc.). Event tracking provides detailed insights into user engagement so you can better understand how users interact with dynamic content. 
    3. Retention rate can be analysed with a pre-made overview report that Google Analytics 4 provides. This user metric measures the percentage of visitors who return to your website or app after their first visit within a specific time period. Retention rate = (users with subsequent visits / total users in the initial cohort) x 100. Use this information to understand how effective your content, user experience and marketing efforts are at retaining visitors. A high retention rate suggests a larger base of loyal, returning buyers. 
    4. Average session duration calculates the average time users spend on your website or app per session. Average session duration = total duration of all sessions / number of sessions. A high average session duration indicates that users are interested and engaged with your content. 
    5. Site searches and search queries on your website are automatically tracked by Google Analytics 4. These metrics include search terms, number of searches and user engagement post-search. You can use site search metrics to better understand user intent and refine content based on users’ searches. 
    6. Entrance and exit pages show where users first enter and leave your site. This metric is calculated by the percentage of sessions that start or end on a specific page. Knowing where users are entering and leaving your site can help identify places for content optimisation. 
    7. Device and browser info includes data about which devices and browsers websites or apps visitors use. This is another metric that Google Analytics 4 automatically collects and categorises during user sessions. You can use this data to improve the user experience on relevant devices and browsers. 
    8. Bounce rate is the percentage of single-page sessions where users leave your site or app without interacting further. Bounce rate = (# of single-page sessions / total # of sessions) x 100. Bounce rate is useful for determining how effective your landing pages are; pages with high bounce rates can be tweaked and optimised to enhance user engagement (a sketch of these calculations follows this list).
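
    Since the formulas above are simple ratios, the short C++ sketch below works through retention rate, average session duration and bounce rate with invented numbers, just to make the arithmetic explicit; real figures would come from your GA4 reports.

#include <iostream>

int main()
{
    // Invented example values (illustrative only).
    double returningUsers = 120.0;        // users with subsequent visits
    double initialCohort = 800.0;         // total users in the initial cohort
    double totalSessionSeconds = 54000.0; // total duration of all sessions
    double sessionCount = 450.0;          // number of sessions
    double singlePageSessions = 90.0;     // sessions with no further interaction

    // Retention rate = (users with subsequent visits / total users in cohort) x 100
    std::cout << "Retention rate: " << 100.0 * returningUsers / initialCohort << "%\n";

    // Average session duration = total duration of all sessions / number of sessions
    std::cout << "Average session duration: " << totalSessionSeconds / sessionCount << " s\n";

    // Bounce rate = (single-page sessions / total sessions) x 100
    std::cout << "Bounce rate: " << 100.0 * singlePageSessions / sessionCount << "%\n";

    return 0;
}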

    Examples of how Matomo can elevate your web analytics

    Although Google Analytics is a powerful tool for understanding user behaviour, it also comes with privacy concerns and a number of limitations. Another web analytics solution like Matomo can help fill those gaps so you can get the most out of your analytics.

    Examples of how Matomo and GA4 can elevate each other
    1. Cross-verify and validate your observations from Google Analytics by comparing them with data from Matomo’s Heatmaps and Session Recordings for the same pages. This gives you access to advanced features that GA4 does not offer.
    Matomo's heatmaps feature
    2. Matomo provides you with greater accuracy thanks to its privacy-friendly design. Unlike GA4, Matomo can be configured to operate without cookies. This means increased accuracy without intrusive cookie consent screens interrupting the user experience. It’s a win for you and for your users. Matomo also doesn’t apply data sampling, so you can rest assured that the data you see is 100% accurate.
    3. Unlike GA4, Matomo offers direct access to customer support, so you can save time sifting through community forum threads and online documentation. Gain personalised assistance and guidance for your analytics questions, and resolve issues efficiently.
    Screenshot of the Form Analytics Dashboard, showing data and insights on form usage and performance
    4. Matomo’s Form Analytics and Media Analytics extend your analytics capabilities beyond just pageviews and event tracking.

      Tracking user interactions with forms can tell you which fields users struggle with, where common drop-off points occur, and which parts of the form successfully guide visitors towards submission.

      See first-hand how Concrete CMS tripled their leads using Matomo’s Form Analytics.

      Media Analytics can provide insight into how users interact with image, video or audio content on your website. You can use this feature to assess the relevance and popularity of specific content by seeing what your audience engages with.


    Final thoughts

    Although Google Analytics is a powerful tool on its own, Matomo can elevate your web analytics with advanced features, data accuracy and a privacy-friendly design. Don’t play a guessing game with your data: Matomo provides 100% accurate data, so you don’t have to rely on AI or machine learning to fill in the gaps. Matomo can also be configured to run cookieless, which provides you with more accurate data and a better user experience. 

    Lastly, Matomo is fully compliant with some of the world’s strictest privacy regulations, like the GDPR. You won’t have to sacrifice compliance for accurate, high-quality data. 

    Start your 21-day free trial of Matomo — no credit card required.

  • FFmpeg error "avcodec_send_frame" returns "Invalid argument"

    17 October 2023, by Paulo Coutinho

    I have a problem with the function avcodec_send_frame throwing the error Error sending frame for encoding: Invalid argument (-22). I have already searched, checked and rechecked, and found nothing. My code closely follows the ffmpeg examples. Can anyone help me? Thanks.

    


    This is my code:

    


static void callbackAddSubtitle(const Message &m, const Response r)
{
    try
    {
        av_log_set_level(AV_LOG_DEBUG);

        spdlog::debug("[Mapping :: callbackAddSubtitle] Adding subtitle...");

        auto inputOpt = m.get("input");
        auto outputOpt = m.get("output");

        if (!inputOpt.has_value() || !outputOpt.has_value())
        {
            r(std::string{"INVALID-PARAMS"});
            return;
        }

        const std::string &input = inputOpt.value();
        const std::string &output = outputOpt.value();

        // initialize input
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing input video...");

        AVFormatContext *inputFormatCtx = avformat_alloc_context();
        if (avformat_open_input(&inputFormatCtx, input.c_str(), nullptr, nullptr) != 0)
        {
            spdlog::error("Failed to open input");
            r(std::string{"ERROR-OPEN-INPUT"});
            return;
        }

        if (avformat_find_stream_info(inputFormatCtx, nullptr) < 0)
        {
            spdlog::error("Failed to find stream information");
            avformat_close_input(&inputFormatCtx);
            r(std::string{"ERROR-FIND-STREAM"});
            return;
        }

        int videoStreamIndex = av_find_best_stream(inputFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
        if (videoStreamIndex < 0)
        {
            spdlog::error("Could not find a video stream");
            r(std::string{"ERROR-FIND-VIDEO-STREAM"});
            return;
        }

        AVRational timeBase = inputFormatCtx->streams[videoStreamIndex]->time_base;

        AVCodecParameters *inputCodecPar = inputFormatCtx->streams[videoStreamIndex]->codecpar;
        const AVCodec *inputCodec = avcodec_find_decoder(inputCodecPar->codec_id);
        AVCodecContext *inputCodecCtx = avcodec_alloc_context3(inputCodec);

        avcodec_parameters_to_context(inputCodecCtx, inputCodecPar);
        avcodec_open2(inputCodecCtx, inputCodec, nullptr);

        // initialize input audio
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing input audio...");

        int audioStreamIndex = av_find_best_stream(inputFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);
        if (audioStreamIndex < 0)
        {
            spdlog::error("Could not find an audio stream");
            r(std::string{"ERROR-FIND-AUDIO-STREAM"});
            return;
        }

        AVCodecParameters *inputAudioCodecPar = inputFormatCtx->streams[audioStreamIndex]->codecpar;
        const AVCodec *inputAudioCodec = avcodec_find_decoder(inputAudioCodecPar->codec_id);
        AVCodecContext *inputAudioCodecCtx = avcodec_alloc_context3(inputAudioCodec);

        avcodec_parameters_to_context(inputAudioCodecCtx, inputAudioCodecPar);
        avcodec_open2(inputAudioCodecCtx, inputAudioCodec, nullptr);

        // initialize output video
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing output video...");

        AVFormatContext *outputFormatCtx = nullptr;
        avformat_alloc_output_context2(&outputFormatCtx, nullptr, nullptr, output.c_str());
        AVStream *outputStream = avformat_new_stream(outputFormatCtx, nullptr);

        AVCodecContext *outputCodecCtx = avcodec_alloc_context3(inputCodec);
        avcodec_parameters_to_context(outputCodecCtx, inputCodecPar);
        int retOutVideo = avcodec_open2(outputCodecCtx, inputCodec, nullptr);

        if (retOutVideo < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retOutVideo);
            spdlog::error("Failed to initialize output video: {}", err);
            r(std::string{"ERROR-INIT-OUTPUT-VIDEO"});
            return;
        }

        outputStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        outputStream->codecpar->codec_id = inputCodec->id;
        avcodec_parameters_from_context(outputStream->codecpar, outputCodecCtx);

        if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
        {
            avio_open(&outputFormatCtx->pb, output.c_str(), AVIO_FLAG_WRITE);
        }

        const char *pixelFormatName = getPixelFormatName(outputCodecCtx->pix_fmt);
        spdlog::debug("Pixel Format: {}", pixelFormatName);

        // initialize output audio
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing output audio...");

        AVStream *outputAudioStream = avformat_new_stream(outputFormatCtx, nullptr);
        AVCodecContext *outputAudioCodecCtx = avcodec_alloc_context3(inputAudioCodec);
        avcodec_parameters_to_context(outputAudioCodecCtx, inputAudioCodecPar);
        int retOutAudio = avcodec_open2(outputAudioCodecCtx, inputAudioCodec, nullptr);

        if (retOutAudio < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retOutAudio);
            spdlog::error("Failed to initialize output audio: {}", err);
            r(std::string{"ERROR-INIT-OUTPUT-AUDIO"});
            return;
        }

        outputAudioStream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
        outputAudioStream->codecpar->codec_id = inputAudioCodec->id;
        avcodec_parameters_from_context(outputAudioStream->codecpar, outputAudioCodecCtx);

        // initialize filters
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing filters...");

        AVFilterGraph *filterGraph = avfilter_graph_alloc();
        if (!filterGraph)
        {
            spdlog::error("Failed to allocate filter graph");
            r(std::string{"ERROR-FILTER-GRAPH"});
            return;
        }

        AVFilterContext *bufferSinkCtx;
        AVFilterContext *bufferSrcCtx;

        const AVFilter *bufferSink = avfilter_get_by_name("buffersink");
        const AVFilter *bufferSrc = avfilter_get_by_name("buffer");

        // input filter
        char filterInArgs[512];
        snprintf(filterInArgs, sizeof(filterInArgs), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d", inputCodecPar->width, inputCodecPar->height, inputCodecCtx->pix_fmt, timeBase.num, timeBase.den, inputCodecCtx->sample_aspect_ratio.num, inputCodecCtx->sample_aspect_ratio.den);

        spdlog::debug("[Mapping :: callbackAddSubtitle] Buffer src args: {}", filterInArgs);

        int retFilterIn = avfilter_graph_create_filter(&bufferSrcCtx, bufferSrc, "in", filterInArgs, nullptr, filterGraph);
        if (retFilterIn < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retFilterIn);
            spdlog::error("Failed to create bufferSrcCtx: {}", err);
            r(std::string{"ERROR-CREATE-FILTER-SRC"});
            return;
        }

        // output filter
        int retFilterOut = avfilter_graph_create_filter(&bufferSinkCtx, bufferSink, "out", nullptr, nullptr, filterGraph);

        if (retFilterOut < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retFilterOut);
            spdlog::error("Failed to create bufferSinkCtx: {}", err);
            r(std::string{"ERROR-CREATE-FILTER-SINK"});
            return;
        }

        enum AVPixelFormat pix_fmts[] = {AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE};
        av_opt_set_int_list(bufferSinkCtx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);

        // add filters to graph and link them
        const char *filterSpec = "drawtext=text='Legenda Adicionada Automaticamente Via FFMPEG e C++': fontcolor=yellow: bordercolor=black: fontfile='/Users/paulo/Downloads/roboto/Roboto-Black.ttf'";
        const AVFilter *filter = avfilter_get_by_name("drawtext");

        AVFilterInOut *outputs = avfilter_inout_alloc();
        AVFilterInOut *inputs = avfilter_inout_alloc();

        outputs->name = av_strdup("in");
        outputs->filter_ctx = bufferSrcCtx;
        outputs->pad_idx = 0;
        outputs->next = nullptr;
        inputs->name = av_strdup("out");
        inputs->filter_ctx = bufferSinkCtx;
        inputs->pad_idx = 0;
        inputs->next = nullptr;

        if (avfilter_graph_parse_ptr(filterGraph, filterSpec, &inputs, &outputs, nullptr) < 0)
        {
            spdlog::error("Failed to parse filter graph");
            r(std::string{"ERROR-PARSE-FILTER"});
            return;
        }

        if (avfilter_graph_config(filterGraph, nullptr) < 0)
        {
            spdlog::error("Failed to configure filter graph");
            r(std::string{"ERROR-CONFIG-FILTER"});
            return;
        }

        // header
        spdlog::debug("[Mapping :: callbackAddSubtitle] Writing header...");

        if (avformat_write_header(outputFormatCtx, nullptr) < 0)
        {
            spdlog::error("Error writing header");
            r(std::string{"ERROR-WRITE-HEADER"});
            return;
        }

        // read frames and write to output
        AVPacket *packet = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();

        frame->format = inputCodecCtx->pix_fmt;
        frame->width = inputCodecCtx->width;
        frame->height = inputCodecCtx->height;

        AVFrame *filt_frame = av_frame_alloc();

        filt_frame->format = inputCodecCtx->pix_fmt;
        filt_frame->width = inputCodecCtx->width;
        filt_frame->height = inputCodecCtx->height;

        while (av_read_frame(inputFormatCtx, packet) >= 0)
        {
            if (packet->stream_index == videoStreamIndex)
            {
                if (avcodec_send_packet(inputCodecCtx, packet) < 0)
                {
                    spdlog::error("Error sending packet for decoding");
                    r(std::string{"ERROR-SEND-PACKET-DECODE"});
                    return;
                }

                while (avcodec_receive_frame(inputCodecCtx, frame) == 0)
                {
                    // Send the decoded frame to the filter graph
                    if (av_buffersrc_add_frame_flags(bufferSrcCtx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
                    {
                        spdlog::error("Error while feeding the filtergraph");
                        r(std::string{"ERROR-FEED-FILTERGRAPH"});
                        return;
                    }

                    // Receive a frame from the filter graph
                    if (av_buffersink_get_frame(bufferSinkCtx, filt_frame) < 0)
                    {
                        spdlog::error("Error while receiving the filtered frame");
                        r(std::string{"ERROR-RECEIVE-FILTERED-FRAME"});
                        return;
                    }

                    // Send the filtered frame for re-encoding
                    int retSendFrame = avcodec_send_frame(outputCodecCtx, filt_frame);
                    if (retSendFrame < 0)
                    {
                        char err[AV_ERROR_MAX_STRING_SIZE];
                        av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retSendFrame);
                        spdlog::error("Error sending frame for encoding: {}", err);
                        r(std::string{"ERROR-SEND-FRAME-ENCODE"});
                        return;
                    }

                    AVPacket *output_packet = av_packet_alloc();
                    output_packet->data = nullptr;
                    output_packet->size = 0;

                    // Re-encode filt_frame into a packet
                    if (avcodec_receive_packet(outputCodecCtx, output_packet) == 0)
                    {
                        // Write the packet to the output stream
                        av_write_frame(outputFormatCtx, output_packet);
                        av_packet_unref(output_packet);
                    }

                    av_frame_unref(filt_frame);
                }

                // time
                packet->pts = av_rescale_q_rnd(packet->pts, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->dts = av_rescale_q_rnd(packet->dts, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->duration = av_rescale_q(packet->duration, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base);
                packet->stream_index = videoStreamIndex;

                // write packet to output video stream
                av_interleaved_write_frame(outputFormatCtx, packet);
            }
            else if (packet->stream_index == audioStreamIndex)
            {
                // rescale timestamps
                packet->pts = av_rescale_q_rnd(packet->pts, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->dts = av_rescale_q_rnd(packet->dts, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->duration = av_rescale_q(packet->duration, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base);
                packet->stream_index = audioStreamIndex;

                // write packet to output audio stream
                av_interleaved_write_frame(outputFormatCtx, packet);
            }

            av_packet_unref(packet);
        }

        av_packet_free(&packet);
        av_frame_free(&frame);
        av_frame_free(&filt_frame);

        spdlog::debug("[Mapping :: callbackAddSubtitle] Writing trailer...");

        if (av_write_trailer(outputFormatCtx) < 0)
        {
            spdlog::error("Error writing trailer");
            r(std::string{"ERROR-WRITE-TRAILER"});
            return;
        }

        // cleanup
        spdlog::debug("[Mapping :: callbackAddSubtitle] Cleaning...");

        if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
        {
            avio_closep(&outputFormatCtx->pb);
        }

        avcodec_free_context(&inputCodecCtx);
        avcodec_free_context(&inputAudioCodecCtx);
        avcodec_free_context(&outputCodecCtx);
        avcodec_free_context(&outputAudioCodecCtx);

        avformat_free_context(inputFormatCtx);
        avformat_free_context(outputFormatCtx);

        r(std::string{"OK"});
    }
    catch (const std::exception &e)
    {
        spdlog::error("Error: {}", e.what());
        r(std::string{"ERROR"});
    }
}


    


    The error is:

    


    [2023-10-17 06:30:16.936] [debug] [Mapping :: callbackAddSubtitle] Adding subtitle...
[2023-10-17 06:30:16.936] [debug] [Mapping :: callbackAddSubtitle] Initializing input video...
[NULL @ 0x153604a60] Opening '/Users/paulo/Downloads/movie.mp4' for reading
[file @ 0x6000001fd170] Setting default whitelist 'file,crypto,data'
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] ISO: File Type Major Brand: isom
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Processing st: 0, edit list 0 - media time: 0, duration: 2669670
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Processing st: 1, edit list 0 - media time: 1024, duration: 4272096
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] drop a frame at curr_cts: 0 @ 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Before avformat_find_stream_info() pos: 113542488 bytes read:110788 seeks:1 nb_streams:2
[h264 @ 0x153604cd0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] Decoding VUI
[h264 @ 0x153604cd0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] demuxer injecting skip 1024 / discard 0
[aac @ 0x1536056f0] skip 1024 / discard 0 samples due to side data
[h264 @ 0x153604cd0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] Decoding VUI
[h264 @ 0x153604cd0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0x153604cd0] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x153604cd0] Format yuv420p chosen by get_format().
[h264 @ 0x153604cd0] Reinit context to 1088x1920, pix_fmt: yuv420p
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] All info found
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] After avformat_find_stream_info() pos: 195211 bytes read:305951 seeks:2 frames:2
[2023-10-17 06:30:18.160] [debug] [Mapping :: callbackAddSubtitle] Initializing input audio...
[h264 @ 0x143604330] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x143604330] Decoding VUI
[h264 @ 0x143604330] nal_unit_type: 8(PPS), nal_ref_idc: 3
[2023-10-17 06:30:18.160] [debug] [Mapping :: callbackAddSubtitle] Initializing output video...
[h264 @ 0x143611ec0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x143611ec0] Decoding VUI
[h264 @ 0x143611ec0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[file @ 0x6000001f4000] Setting default whitelist 'file,crypto,data'
[2023-10-17 06:30:18.167] [debug] Pixel Format: YUV420P
[2023-10-17 06:30:18.167] [debug] [Mapping :: callbackAddSubtitle] Initializing output audio...
[2023-10-17 06:30:18.167] [debug] [Mapping :: callbackAddSubtitle] Initializing filters...
[2023-10-17 06:30:18.168] [debug] [Mapping :: callbackAddSubtitle] Buffer src args: video_size=1080x1920:pix_fmt=0:time_base=1/30000:pixel_aspect=1/1
detected 10 logical cores
[in @ 0x6000004ec0b0] Setting 'video_size' to value '1080x1920'
[in @ 0x6000004ec0b0] Setting 'pix_fmt' to value '0'
[in @ 0x6000004ec0b0] Setting 'time_base' to value '1/30000'
[in @ 0x6000004ec0b0] Setting 'pixel_aspect' to value '1/1'
[in @ 0x6000004ec0b0] w:1080 h:1920 pixfmt:yuv420p tb:1/30000 fr:0/1 sar:1/1
[AVFilterGraph @ 0x6000017e8000] Setting 'text' to value 'Legenda Adicionada Automaticamente Via FFMPEG e C++'
[AVFilterGraph @ 0x6000017e8000] Setting 'fontcolor' to value 'yellow'
[AVFilterGraph @ 0x6000017e8000] Setting 'bordercolor' to value 'black'
[AVFilterGraph @ 0x6000017e8000] Setting 'fontfile' to value '/Users/paulo/Downloads/roboto/Roboto-Black.ttf'
[AVFilterGraph @ 0x6000017e8000] query_formats: 3 queried, 2 merged, 0 already done, 0 delayed
[2023-10-17 06:30:18.172] [debug] [Mapping :: callbackAddSubtitle] Writing header...
[h264 @ 0x143604330] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0x143604330] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x143604330] Format yuv420p chosen by get_format().
[h264 @ 0x143604330] Reinit context to 1088x1920, pix_fmt: yuv420p
[Parsed_drawtext_0 @ 0x6000004f4160] Copying data in avfilter.
[Parsed_drawtext_0 @ 0x6000004f4160] n:0 t:0.000000 text_w:424 text_h:16 x:0 y:0
[2023-10-17 06:30:18.182] [error] Error sending frame for encoding: Invalid argument
Returned Value: ERROR-SEND-FRAME-ENCODE


    


  • ffmpeg giving Error while decoding stream #0:1: Invalid data found when processing input

    12 October 2023, by Aby

    I am trying to merge two video files into one using ffmpeg on Windows. The process has been proven to work over and over (with over 100 files merged together at some points), but I have come across an input file that is causing the process to fail with the errors:

    


[aac @ 00000142532f74c0] channel element 1.0 is not allocated
Error while decoding stream #0:1: Invalid data found when processing input
[aac @ 00000142532f74c0] channel element 1.0 is not allocated
Error while decoding stream #0:1: Invalid data found when processing input
[aac @ 00000142532f74c0] channel element 1.0 is not allocated
.
.
.


    


    There are three command-line steps to get here, using a concat-inputs.dat file containing:

    


    file E:/..../snippet A.mp4
file E:/..../snippet B.mp4


    


    (Copies of these files can be found at https://filebin.net/77wbowvh7vbklkey/snippet_A.mp4 and https://filebin.net/77wbowvh7vbklkey/snippet_B.mp4)

    


    Step 1:

    


    > ffmpeg-6.0-full_build/bin/ffmpeg -y -progress ".Default.mp4.progressinfo.dat" -vsync 0 -f concat -safe 0 -i "E:/...../concat-inputs.dat" -c:v copy -c:a copy -crf 0 -b:v 10M "E:/...../video.Default.mp4"


    


    with the output....

    


ffmpeg version 6.0-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers

  built with gcc 12.2.0 (Rev10, Built by MSYS2 project)

  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint

  libavutil      58.  2.100 / 58.  2.100
  libavcodec     60.  3.100 / 60.  3.100
  libavformat    60.  3.100 / 60.  3.100
  libavdevice    60.  1.100 / 60.  1.100
  libavfilter     9.  3.100 /  9.  3.100
  libswscale      7.  1.100 /  7.  1.100
  libswresample   4. 10.100 /  4. 10.100
  libpostproc    57.  1.100 / 57.  1.100

-vsync is deprecated. Use -fps_mode

Passing a number to -vsync is deprecated, use a string argument as described in the manual.
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001bf88ffe240] Auto-inserting h264_mp4toannexb bitstream filter
Input #0, concat, from 'E:/...../concat-inputs.dat':

  Duration: N/A, start: -0.010667, bitrate: 20382 kb/s

  Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 20043 kb/s, 50 fps, 50 tbr, 12800 tbn

    Metadata:

      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
      encoder         : Lavc60.3.100 libx264

  Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 339 kb/s

    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]

Output #0, mp4, to 'E:/...../video.Default.mp4':

  Metadata:
    encoder         : Lavf60.3.100

  Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 10000 kb/s, 50 fps, 50 tbr, 12800 tbn

    Metadata:

      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]

      encoder         : Lavc60.3.100 libx264

  Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 339 kb/s

    Metadata:

      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]

Stream mapping:

  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)

Press [q] to stop, [?] for help

frame=    0 fps=0.0 q=-1.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=N/A    
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001bf890653c0] Auto-inserting h264_mp4toannexb bitstream filter

[mp4 @ 000001bf89000580] Non-monotonous DTS in output stream 0:1; previous: 180224, current: 180192; changing to 180225. This may result in incorrect timestamps in the output file.

frame=  210 fps=0.0 q=-1.0 Lsize=   11537kB time=00:00:04.21 bitrate=22433.7kbits/s speed=41.9x

video:11417kB audio:114kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.053312%


    


    Step 2

    


    > ffmpeg-6.0-full_build/bin/ffmpeg -y -progress ".Default.mp4.progressinfo.dat" -vsync 0 -f concat -safe 0 -i "E:/...../concat-inputs.dat" -c:v copy -c:a copy -crf 0 -b:v 10M "E:/...../audio.Default.wav"


    


    which outputs...

    


    ffmpeg version 6.0-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers

  built with gcc 12.2.0 (Rev10, Built by MSYS2 project)

  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint

  libavutil      58.  2.100 / 58.  2.100
  libavcodec     60.  3.100 / 60.  3.100
  libavformat    60.  3.100 / 60.  3.100
  libavdevice    60.  1.100 / 60.  1.100
  libavfilter     9.  3.100 /  9.  3.100
  libswscale      7.  1.100 /  7.  1.100
  libswresample   4. 10.100 /  4. 10.100
  libpostproc    57.  1.100 / 57.  1.100

-vsync is deprecated. Use -fps_mode

Passing a number to -vsync is deprecated, use a string argument as described in the manual.

[mov,mp4,m4a,3gp,3g2,mj2 @ 00000246d314e240] Auto-inserting h264_mp4toannexb bitstream filter

Input #0, concat, from 'E:/...../concat-inputs.dat':

  Duration: N/A, start: -0.010667, bitrate: 20382 kb/s

  Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 20043 kb/s, 50 fps, 50 tbr, 12800 tbn

    Metadata:

      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
      encoder         : Lavc60.3.100 libx264

  Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 339 kb/s

    Metadata:

      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]

[out#0/wav @ 00000246d31bd240] Codec AVOption b (set bitrate (in bits/s)) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.

Output #0, wav, to 'E:/...../audio.Default.wav':

  Metadata:

    ISFT            : Lavf60.3.100

  Stream #0:0(und): Audio: aac (LC) ([255][0][0][0] / 0x00FF), 96000 Hz, 5.1, fltp, 339 kb/s

    Metadata:

      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]

Stream mapping:

  Stream #0:1 -> #0:0 (copy)

Press [q] to stop, [?] for help

size=       0kB time=00:00:00.00 bitrate=N/A speed=N/A    
[mov,mp4,m4a,3gp,3g2,mj2 @ 00000246d3b009c0] Auto-inserting h264_mp4toannexb bitstream filter

[wav @ 00000246d3150580] Non-monotonous DTS in output stream 0:0; previous: 180224, current: 180192; changing to 180224. This may result in incorrect timestamps in the output file.

size=     114kB time=00:00:04.21 bitrate= 222.4kbits/s speed= 128x

video:0kB audio:114kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.102561%


    


    Step 3

    


    > ffmpeg-6.0-full_build/bin/ffmpeg -y -progress ".Default.mp4.progressinfo.dat" -i "E:/...../video.Default.mp4" -i "E:/...../audio.Default.wav" -crf 0 -c:v copy -c:a aac "E:/...../Default.mp4"


    


    ... which then gives the errors....

    


    ffmpeg version 6.0-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers

  built with gcc 12.2.0 (Rev10, Built by MSYS2 project)

  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint

  libavutil      58.  2.100 / 58.  2.100
  libavcodec     60.  3.100 / 60.  3.100    
  libavformat    60.  3.100 / 60.  3.100    
  libavdevice    60.  1.100 / 60.  1.100    
  libavfilter     9.  3.100 /  9.  3.100    
  libswscale      7.  1.100 /  7.  1.100    
  libswresample   4. 10.100 /  4. 10.100    
  libpostproc    57.  1.100 / 57.  1.100

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'E:/...../video.Default.mp4':

  Metadata:

    major_brand     : isom    
    minor_version   : 512    
    compatible_brands: isomiso2avc1mp41    
    encoder         : Lavf60.3.100

  Duration: 00:00:04.23, start: 0.000000, bitrate: 22359 kb/s

  Stream #0:0[0x1](und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 22178 kb/s, 49.80 fps, 50 tbr, 12800 tbn (default)

    Metadata:

      handler_name    : VideoHandler    
      vendor_id       : [0][0][0][0]    
      encoder         : Lavc60.3.100 libx264

  Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 221 kb/s (default)

    Metadata:

      handler_name    : SoundHandler    
      vendor_id       : [0][0][0][0]

[aac @ 000001425315e580] Multiple frames in a packet.

Input #1, wav, from 'E:/...../audio.Default.wav':

  Metadata:

    encoder         : Lavf60.3.100

  Duration: 00:00:04.22, bitrate: 221 kb/s

  Stream #1:0: Audio: aac (LC) ([255][0][0][0] / 0x00FF), 96000 Hz, 5.1, fltp, 339 kb/s

Stream mapping:

  Stream #0:0 -> #0:0 (copy)    
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))

Press [q] to stop, [?] for help

Output #0, mp4, to 'E:/...../Default.mp4':

  Metadata:

    major_brand     : isom    
    minor_version   : 512    
    compatible_brands: isomiso2avc1mp41

    encoder         : Lavf60.3.100

  Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 22178 kb/s, 49.80 fps, 50 tbr, 12800 tbn (default)

    Metadata:

      handler_name    : VideoHandler    
      vendor_id       : [0][0][0][0]    
      encoder         : Lavc60.3.100 libx264

  Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 341 kb/s (default)

    Metadata:

      handler_name    : SoundHandler    
      vendor_id       : [0][0][0][0]    
      encoder         : Lavc60.3.100 aac

frame=    0 fps=0.0 q=-1.0 size=       0kB time=-577014:32:22.77 bitrate=  -0.0kbits/s speed=N/A    
[aac @ 00000142532f74c0] channel element 1.0 is not allocated

Error while decoding stream #0:1: Invalid data found when processing input    
[aac @ 00000142532f74c0] channel element 1.0 is not allocated
.
.
.


    


    If I were to merge snippet B with snippet B, then it would work; it's something about snippet A that is causing the problem.

    


    Is there any way to get around this? What is it about snippet A that is causing a problem, and is there a way to "normalize" it so that it can be merged as part of the "set"?

    


    Note: I just upgraded to ffmpeg 6 after a previous version was giving the same problems, so I will also work on the deprecation warnings when I can.