Other articles (82)

  • The farm’s regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the mutualisation on a regular basis. Combined with a system Cron on the central site of the mutualisation, this makes it easy to generate regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP stands for :
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphic-design applications. It was launched by Adobe Systems in April 2001 and integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it handles a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to store, as an XML document, information about a file : title, author, history (...)

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
    Distribution name | Version name | Version number
    Debian | Squeeze | 6.x.x
    Debian | Wheezy | 7.x.x
    Debian | Jessie | 8.x.x
    Ubuntu | The Precise Pangolin | 12.04 LTS
    Ubuntu | The Trusty Tahr | 14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

On other sites (4401)

  • Unveiling GA4 Issues : 8 Questions from a Marketer That GA4 Can’t Answer

    8 January 2024, by Alex

    It’s hard to believe, but Universal Analytics had a lifespan of 11 years, from its announcement in March 2012. Despite occasional criticism, this service established standards for the entire web analytics industry. Many metrics and reports became benchmarks for a whole generation of marketers. It truly was an era.

    For instance, a lot of marketers got used to starting each workday by inspecting dashboards and standard traffic reports in the Universal Analytics web interface. There were so, so many of those days. They became so accustomed to Universal Analytics that they would enter reports, manipulate numbers, and play with metrics almost on autopilot, without much thought.

    However, six months have passed since the sunset of Universal Analytics – precisely on July 1, 2023, when Google stopped processing hits for properties still running the previous version of Google Analytics. The days when data about visitors and their interactions with the website was clearly structured within the familiar UA paradigm are now in the past. GA4 has brought a plethora of opportunities to marketers, but along with those opportunities came a series of complexities.

    GA4 issues

    Since its initial announcement in 2020, GA4 has been plagued with errors and inconsistencies. It still has poor and sometimes illogical documentation, numerous restrictions, and peculiar interface solutions. But more importantly, the barrier to entry into web analytics has significantly increased.

    If you diligently follow GA4 updates, read the documentation, and have some skill in working with data (SQL and basic statistics), you probably won’t run into any problems – you know how to set up a convenient and efficient environment for your product and marketing data. But what if you’re not that proficient ? That’s when the issues arise.

    In this article, we try to address a series of straightforward questions that less experienced users – marketers, project managers, SEO specialists, and others – want answers to. They have no time to delve into the intricacies of GA4, but they do need the fundamentals that are crucial for their day-to-day work.

    Previously, in Universal Analytics, they could answer these questions quickly and conveniently. Now the situation has become, to put it mildly, more complex. We’ve identified 8 such questions for which the current version of GA4 either fails to provide an answer or implies that getting one would require significant extra work. So, let’s dive into them one by one.

    Question 1 : What are the most popular traffic sources on my website ?

    Seemingly a straightforward question. What does GA4 tell us ? It responds with a question : “Which traffic source parameter are you interested in ?”

    GA4 traffic source

    Wait, what ?

    People just want to know which resources bring them the most traffic. Is that really an issue ?

    Unfortunately, yes. In GA4, there are not one, not two, but three traffic source parameters :

    1. Session source.
    2. First User Source – the source of the first session for each user.
    3. Just the source – determined at the event or conversion level.

    If you wanted to open a report and draw conclusions quickly, we have bad news for you. Before you start ranking your traffic sources by popularity, you need to decide which parameter you will look at, and in what context. And even once you decide, you still have to choose between the standard reports : User Acquisition or Traffic Acquisition.

    Yes, there is a difference between them : the first uses the First User Source parameter, and the second uses the session source. And you need to figure that out too.
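
    For example (a made-up visitor) : someone who first finds your site through google / organic and converts a week later in a session that started from an email newsletter will appear under google / organic in the User Acquisition report (First User Source) and under email in the Traffic Acquisition report (session source).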

    Question 2 : What is my conversion rate ?

    This question concerns everyone, and it should be simple, implying a straightforward answer. But no.

    GA4 conversion rate

    In GA4, there are three conversion metrics (yes, three) :

    1. Session conversion – the percentage of sessions with a conversion.
    2. User conversion – the percentage of users who completed a conversion.
    3. First-time Purchaser Conversion – the share of active users who made their first purchase.

    Even if the last metric is of little interest here, GA4 users still need to choose between the remaining two. But what’s next ? Which parameters do you use for comparison ? Session source or user source ? What if you want to see the conversion rate for a specific event ? And how do you do this in explorations rather than in standard reports ?

    In the end, instead of an answer to a simple question, marketers get a bunch of new questions.
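
    To make the difference concrete, here is a small illustration with made-up numbers (not data from any real property) : the same day of traffic produces two different “conversion rates” depending on the denominator you pick.

    #include <iostream>

    int main()
    {
        // Hypothetical day of traffic: 1,000 users generated 1,500 sessions;
        // 60 of those sessions contained a conversion, spread across 45 users.
        double users              = 1000;
        double sessions           = 1500;
        double sessions_with_conv = 60;
        double users_with_conv    = 45;

        double session_conversion = 100.0 * sessions_with_conv / sessions; // 4.0 %
        double user_conversion    = 100.0 * users_with_conv / users;       // 4.5 %

        std::cout << "Session conversion : " << session_conversion << " %\n";
        std::cout << "User conversion : "    << user_conversion    << " %\n";
        return 0;
    }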

    Question 3. Can I trust user and session metrics ?

    Unfortunately, no. This may boggle the mind of those not well-versed in the mechanics of calculating user and session metrics, but it’s the plain truth : the numbers in GA4 and those in reality may and will differ.

    GA4 confidence levels

    The reason is that GA4 uses the HyperLogLog++ statistical algorithm to count unique values. Without delving into details, it’s a mechanism for approximate estimation of a metric with a certain level of error.

    This error level is quite well documented. For instance, for the Total Users metric, the error level is 1.63% (for a 95% confidence interval). In simple terms, this means that 100,000 users in the GA4 interface equate to 100,000 ± 1.63% in reality.
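
    As a quick back-of-the-envelope check (a sketch of the arithmetic only, not of GA4’s internals), this is what the 1.63% figure means for a reported 100,000 users :

    #include <iostream>

    int main()
    {
        // Reported Total Users and the documented HyperLogLog++ error level
        // for a 95% confidence interval.
        double reported_users = 100000;
        double error_level    = 0.0163; // 1.63 %

        double lower = reported_users * (1.0 - error_level); // 98,370
        double upper = reported_users * (1.0 + error_level); // 101,630

        std::cout << "The true value lies in [" << lower << ", " << upper
                  << "] about 95% of the time.\n";
        return 0;
    }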

    Furthermore – and this is no surprise to anyone – GA4 samples data. This means that when a query covers too much data or uses a large number of parameters, the application will estimate your metrics based on a partial sample – say, 5, 10, or 30% of the entire population.

    It’s a reasonable trade-off, but it can (and probably will) surprise marketers – the metrics will deviate from reality. All end users can do (short of working with the raw data instead) is take this error level into account in their conclusions.

    Question 4. How do I calculate First Click attribution ?

    You can’t. Unfortunately, as of late, GA4 offers only three attribution models available in the Attribution tab : Last Click, Last Click For Google Ads, and Data Driven. First Click attribution is essential for understanding where and when demand is generated. In the previous version of Google Analytics (and until recently, in the current one), users could quickly apply First Click and other attribution models, compare them, and gain insights. Now, this capability is gone.

    GA4 attribution model

    Certainly, you can look at the conversion distribution by the First User Source parameter – this is a rough proxy for First Click attribution. However, comparing it with other models in the Model Comparison tab won’t be possible. Within the GA4 interface, it makes sense to forget about non-standard attribution models.
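
    As a reminder of what is being lost, here is a minimal sketch (hypothetical touchpoints, not a GA4 or Matomo implementation) of how first-click and last-click attribution assign credit for one converting user :

    #include <iostream>
    #include <string>
    #include <vector>

    int main()
    {
        // Hypothetical journey of one converting user, in chronological order.
        std::vector<std::string> touchpoints = {
            "google / organic",   // where the demand was generated
            "newsletter / email",
            "google / cpc"        // the click right before the purchase
        };

        // First-click attribution credits the touchpoint that created the demand;
        // last-click attribution credits the touchpoint that closed the conversion.
        std::cout << "First click credit : " << touchpoints.front() << "\n";
        std::cout << "Last click credit : "  << touchpoints.back()  << "\n";
        return 0;
    }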

    Question 5. How do I account for intra-session traffic ?

    Intra-session traffic essentially refers to a change of traffic source within a session. Imagine a scenario where a user comes to your site organically from Google and, within a minute, comes back through an email campaign. In the previous version of Google Analytics, a new session with the traffic source “e-mail” would be created in such a case. But now the situation has changed.

    A session now only ends on a timeout – say, 30 minutes without interaction. This means a session always keeps the source it started from. If a user changes the source within a session (clicks on an ad, arrives from an email campaign, and so on), you won’t know anything about it unless they convert. This is a significant blow to intra-session traffic, since its contribution remains virtually invisible.
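
    A minimal sketch of the behaviour described above (simplified, made-up hits rather than GA4’s real code) : with a 30-minute timeout, both hits below fall into one session, and the session keeps the source it started with, so the switch to the email campaign goes unnoticed.

    #include <iostream>
    #include <string>
    #include <vector>

    struct Hit { long minute; std::string source; };

    int main()
    {
        const long timeout_minutes = 30;

        // Hypothetical hits from one user: an organic arrival, then a click
        // from an email campaign one minute later.
        std::vector<Hit> hits = { {0, "google / organic"}, {1, "email / campaign"} };

        std::string session_source;
        long last_minute = -(timeout_minutes + 1);

        for (const Hit &h : hits)
        {
            if (h.minute - last_minute > timeout_minutes)
                session_source = h.source; // a new session starts and keeps this source
            // otherwise the session continues and its source is NOT updated
            last_minute = h.minute;
        }

        std::cout << "Session source reported : " << session_source << "\n"; // google / organic
        return 0;
    }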

    Question 6. How can I account for users who have not consented to the use of third-party cookies ?

    You can’t. Google Consent Mode settings imply several options when a user rejects the use of 3rd party cookies. In GA4 and BigQuery, depersonalized cookieless pings will be sent. These pings do not contain specific client_id, session_id, or other custom dimensions. As a result, you won’t be able to consider them as users or link the actions of such users together.

    Question 7. How can I compare data in explorations with the previous year ?

    The maximum data retention period for a free GA4 account is 14 months. This means that if the date range is wider, you can only use standard reports. You won’t be able to compare or view cohorts or funnels for periods more than 14 months ago. This makes the product functionality less rich because various report formats in explorations are very convenient for comparing specific metrics in easily digestible reports.

    GA4 data retention

    Of course, you always have the option to connect BigQuery and store raw data without limitations, but this usually requires the involvement of an advanced analyst – and that is precisely what is unavailable to most marketers in small teams.

    Question 8. Is the data for yesterday accurate ?

    Unknown. Google states that data processing in GA4 takes up to 48 hours. Although the process is usually faster than that, most users are still left with plenty of room for frustration – and understandably so.

    Data processing time in GA4

    What does “data processing takes 24-48 hours” mean ? When will the data in reports be complete ? For yesterday ? Or the day before yesterday ? Or for all days more than two days ago ? Unclear. What should marketers tell their managers when they are asked whether all the data is in this report ? Well, probably all of it… or maybe not… Let’s wait for 48 hours…

    Undoubtedly, computational resources and time are needed for data preprocessing and aggregation. It’s okay that data for today will not be up-to-date. And probably not for yesterday either. But people just want to know when they can trust their data. Are they asking for too much : just a note that this report contains all the data sent and processed by Google Analytics ?

    What should you do ?

    Credit should be given to the Google team – they have done a lot to let users answer these questions in one form or another. For example, you can stream data to BigQuery and work with the raw data. The entry threshold for this functionality has been significantly lowered. In fact, if you are dissatisfied with the GA4 interface, you can set up your own export to BigQuery and build your own reports with (almost) no restrictions.

    Another strong option is the widespread launch of GTM Server Side. This allows you to quite freely modify the event model and essentially enrich each hit with various parameters, doing this in a first-party context. This, of course, reduces the harmful impact of most of the limitations described in this text.

    But this is not a solution.

    The users in question – marketers, managers, developers – do not want, or do not have the time, to take a deep dive into the issue. They want simple answers to seemingly simple questions. And for now, unfortunately, GA4 is more a professional tool for analysts than a convenient instrument for generating insights for less advanced users.

    Why is this such a serious issue ?

    The thing is – and this is crucial – over the past 10 years, Google has managed to create a sort of GA-bubble for marketers. Many of them have become so accustomed to Google Analytics that when faced with another issue, they don’t venture to explore alternative solutions but attempt to solve it on their own. And almost always, this turns out to be expensive and inconvenient.

    However, with the latest updates to GA4, it is becoming increasingly evident that this application is struggling to address even the most basic questions from users. And these questions are not fantastically complex. Much of what was described in this article is not an unsolvable mystery and is successfully addressed by other analytics services.

    Let’s try to answer some of the questions described from the perspective of Matomo.

    Question 1 : What are the most popular traffic sources ? [Solved]

    In the Acquisition panel, you will find at least three easily identifiable reports – for traffic channels (All Channels), sources (Websites), and campaigns (Campaigns). 

    Channel Type Table

    With these, you can quickly and easily answer the question about the most popular traffic sources, and if needed, delve into more detailed information, such as landing pages.

    Question 2 : What is my conversion rate ? [Solved]

    Under Goals in Matomo, you’ll easily find the overall conversion rate for your site. Below that you’ll have access to the conversion rate of each goal you’ve set in your Matomo instance.

    Question 3 : Can I trust user and session metrics ? [Solved]

    Yes. With Matomo, you’re guaranteed 100% accurate data. Matomo does not apply sampling, does not employ approximate statistical algorithms, and has no analog of threshold values. Yes, that is possible, and it’s perfectly normal. If you see a number in the visits or users field, it reflects reality 100%.


    Question 4 : How do I calculate First Click attribution ? [Solved]

    You can do this in the Multi Attribution section – the same place where the other five attribution models available in Matomo are calculated.

    Multi Attribution feature

    You can choose a specific conversion and, in a few clicks, calculate and compare up to 3 marketing attribution models. This means you don’t have to spend several days digging through documentation trying to understand how a particular model is calculated. Have a question – get an answer.

    Question 5 : How do I account for intra-session traffic ? [Solved]

    Matomo creates a new visit whenever the campaign changes. This means you will accurately capture all relevant traffic as long as it is properly tagged : no campaign is lost within a visit, because a new utm_campaign parameter starts a new visit.

    This is a crucial point : when only the referrer changes, a new visit is not created, so accounting for all available traffic becomes your responsibility and depends on how you tag it.
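
    For example (hypothetical URL), a link tagged as https://example.org/landing?utm_campaign=spring_newsletter&utm_medium=email that is clicked in the middle of an organic visit will start a new visit attributed to the spring_newsletter campaign.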


    Question 6 : How can I account for users who have not consented to the use of third-party cookies ? [Solved]

    Google Analytics requires users to accept a cookie consent banner with “analytics_storage=granted” to track them. If users reject cookie consent banners, however, then Google Analytics can’t track these visitors at all. They simply won’t show up in your traffic reports. 

    Matomo doesn’t require cookie consent banners (apart from in the United Kingdom and Germany) and can therefore continue to track visitors even after they have rejected a cookie consent screen. This is achieved through a config_id variable (an equivalent of the user identifier that is rotated once a day).

    Matomo doesn't need cookie consent, so you see a complete view of your traffic

    This means that virtually all of your website traffic will be tracked regardless of whether users accept a cookie consent banner or not.

    Question 7 : How can I compare data in explorations with the previous year ? [Solved]

    There is no limit on data retention for your aggregated reports in Matomo. The essence of the Matomo experience lies in the reporting data, and reports can be retained indefinitely, so you can compare data for any timeframe.

    Date Comparison Selector

  • Screenrecorder application output video resolution issues [closed]

    23 June 2022, by JessieK

    I am using the GitHub code for ScreenRecorder on Linux.
Everything works fine, except for the resolution of the output video.
I tried playing with the settings. The quality has improved significantly, but there is still no way to change the resolution.
I need the output video to be the same size as the input video.

    // Headers required to compile the snippet (omitted in the original post).
    // FFmpeg is a C library, so its headers need extern "C" in C++.
    extern "C"
    {
    #include <libavcodec/avcodec.h>
    #include <libavdevice/avdevice.h>
    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/opt.h>
    #include <libswscale/swscale.h>
    }
    #include <iostream>

    using namespace std;

    /* initialize the resources*/
    ScreenRecorder::ScreenRecorder()
    {
    
        av_register_all();
        avcodec_register_all();
        avdevice_register_all();
        cout<<"\nall required functions are registered successfully";
    }
    
    /* uninitialize the resources */
    ScreenRecorder::~ScreenRecorder()
    {
    
        avformat_close_input(&pAVFormatContext);
        if( !pAVFormatContext )
        {
            cout<<"\nfile closed sucessfully";
        }
        else
        {
            cout<<"\nunable to close the file";
            exit(1);
        }
    
        avformat_free_context(pAVFormatContext);
        if( !pAVFormatContext )
        {
            cout<<"\navformat free successfully";
        }
        else
        {
            cout<<"\nunable to free avformat context";
            exit(1);
        }
    
    }
    
    /* function to capture and store data in frames by allocating required memory and auto deallocating the memory.   */
    int ScreenRecorder::CaptureVideoFrames()
    {
        int flag;
        // frameFinished: when you decode a single packet you may not yet have enough
        // information for a complete frame (depending on the codec). Once you have
        // decoded the group of packets that represents a frame, you have a picture.
        // That is why frameFinished lets you know you decoded enough to have a frame.
        int frameFinished;
    
        int frame_index = 0;
        value = 0;
    
        pAVPacket = (AVPacket *)av_malloc(sizeof(AVPacket));
        av_init_packet(pAVPacket);
    
        pAVFrame = av_frame_alloc();
        if( !pAVFrame )
        {
         cout<<"\nunable to release the avframe resources";
         exit(1);
        }
    
        outFrame = av_frame_alloc();//Allocate an AVFrame and set its fields to default values.
        if( !outFrame )
        {
         cout<<"\nunable to release the avframe resources for outframe";
         exit(1);
        }
    
        int video_outbuf_size;
        int nbytes = av_image_get_buffer_size(outAVCodecContext->pix_fmt,outAVCodecContext->width,outAVCodecContext->height,32);
        uint8_t *video_outbuf = (uint8_t*)av_malloc(nbytes);
        if( video_outbuf == NULL )
        {
            cout<<"\nunable to allocate memory";
            exit(1);
        }
    
        // Setup the data pointers and linesizes based on the specified image parameters and the provided array.
        value = av_image_fill_arrays( outFrame->data, outFrame->linesize, video_outbuf , AV_PIX_FMT_YUV420P, outAVCodecContext->width,outAVCodecContext->height,1 ); // returns : the size in bytes required for src
        if(value < 0)
        {
            cout<<"\nerror in filling image array";
        }
    
        SwsContext* swsCtx_ ;
    
        // Allocate and return swsContext.
        // a pointer to an allocated context, or NULL in case of error
        // Deprecated : Use sws_getCachedContext() instead.
        swsCtx_ = sws_getContext(pAVCodecContext->width,
                                 pAVCodecContext->height,
                                 pAVCodecContext->pix_fmt,
                                 outAVCodecContext->width,
                                 outAVCodecContext->height,
                                 outAVCodecContext->pix_fmt,
                                 SWS_BICUBIC, NULL, NULL, NULL);
    
    
        int ii = 0;
        int no_frames = 100;
        cout<<"\nenter No. of frames to capture : ";
        cin>>no_frames;
    
        AVPacket outPacket;
        int j = 0;
    
        int got_picture;
    
        while( av_read_frame( pAVFormatContext , pAVPacket ) >= 0 )
        {
        if( ii++ == no_frames )break;
            if(pAVPacket->stream_index == VideoStreamIndx)
            {
                value = avcodec_decode_video2( pAVCodecContext , pAVFrame , &frameFinished , pAVPacket );
                if( value < 0)
                {
                    cout<<"unable to decode video";
                }
    
                if(frameFinished)// Frame successfully decoded :)
                {
                    sws_scale(swsCtx_, pAVFrame->data, pAVFrame->linesize,0, pAVCodecContext->height, outFrame->data,outFrame->linesize);
                    av_init_packet(&outPacket);
                    outPacket.data = NULL;    // packet data will be allocated by the encoder
                    outPacket.size = 0;
    
                    avcodec_encode_video2(outAVCodecContext , &outPacket ,outFrame , &got_picture);
    
                    if(got_picture)
                    {
                        if(outPacket.pts != AV_NOPTS_VALUE)
                            outPacket.pts = av_rescale_q(outPacket.pts, video_st->codec->time_base, video_st->time_base);
                        if(outPacket.dts != AV_NOPTS_VALUE)
                            outPacket.dts = av_rescale_q(outPacket.dts, video_st->codec->time_base, video_st->time_base);
                    
                        printf("Write frame %3d (size= %2d)\n", j++, outPacket.size/1000);
                        if(av_write_frame(outAVFormatContext , &outPacket) != 0)
                        {
                            cout<<"\nerror in writing video frame";
                        }
    
                    av_packet_unref(&outPacket);
                    } // got_picture
    
                av_packet_unref(&outPacket);
                } // frameFinished
    
            }
        }// End of while-loop

    That is the first of the two parts. Actually, the original app seems to record video of the same size as my application does, but that still doesn’t help me.

    Second part of the code

    av_free(video_outbuf);

}

/* establishing the connection between camera or screen through its respective folder */
int ScreenRecorder::openCamera()
{

    value = 0;
    options = NULL;
    pAVFormatContext = NULL;

    pAVFormatContext = avformat_alloc_context();//Allocate an AVFormatContext.
/*

X11 video input device.
To enable this input device during configuration you need libxcb installed on your system. It will be automatically detected during configuration.
This device allows one to capture a region of an X11 display. 
refer : https://www.ffmpeg.org/ffmpeg-devices.html#x11grab
*/
    /* current below is for screen recording. to connect with camera use v4l2 as a input parameter for av_find_input_format */ 
    pAVInputFormat = av_find_input_format("x11grab");
    value = avformat_open_input(&pAVFormatContext, ":0.0+10,250", pAVInputFormat, NULL);
    if(value != 0)
    {
       cout<<"\nerror in opening input device";
       exit(1);
    }

    /* set frame per second */
    value = av_dict_set( &options,"framerate","30",0 );
    if(value < 0)
    {
      cout<<"\nerror in setting dictionary value";
       exit(1);
    }

    value = av_dict_set( &options, "preset", "medium", 0 );
    if(value < 0)
    {
      cout<<"\nerror in setting preset values";
      exit(1);
    }

    // Note : in the original snippet this call was commented out, yet the check below
    // uses its return value, so it is restored here.
    value = avformat_find_stream_info(pAVFormatContext,NULL);
    if(value < 0)
    {
      cout<<"\nunable to find the stream information";
      exit(1);
    }

    VideoStreamIndx = -1;

    /* find the first video stream index . Also there is an API available to do the below operations */
    for(int i = 0; i < pAVFormatContext->nb_streams; i++ ) // find video stream position/index.
    {
      if( pAVFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO )
      {
         VideoStreamIndx = i;
         break;
      }

    } 

    if( VideoStreamIndx == -1)
    {
      cout<<"\nunable to find the video stream index. (-1)";
      exit(1);
    }

    // take the codec context of the selected video stream
    pAVCodecContext = pAVFormatContext->streams[VideoStreamIndx]->codec;

    pAVCodec = avcodec_find_decoder(pAVCodecContext->codec_id);
    if( pAVCodec == NULL )
    {
      cout<<"\nunable to find the decoder";
      exit(1);
    }

    value = avcodec_open2(pAVCodecContext , pAVCodec , NULL);//Initialize the AVCodecContext to use the given AVCodec.
    if( value < 0 )
    {
      cout<<"\nunable to open the av codec";
      exit(1);
    }
}

/* initialize the video output file and its properties  */
int ScreenRecorder::init_outputfile()
{
    outAVFormatContext = NULL;
    value = 0;
    output_file = "../media/output.mp4";

    avformat_alloc_output_context2(&outAVFormatContext, NULL, NULL, output_file);
    if (!outAVFormatContext)
    {
        cout<<"\nerror in allocating av format output context";
        exit(1);
    }

/* Returns the output format in the list of registered output formats which best matches the provided parameters, or returns NULL if there is no match. */
    output_format = av_guess_format(NULL, output_file ,NULL);
    if( !output_format )
    {
     cout<<"\nerror in guessing the video format. try with correct format";
     exit(1);
    }

    video_st = avformat_new_stream(outAVFormatContext ,NULL);
    if( !video_st )
    {
        cout<<"\nerror in creating a av format new stream";
        exit(1);
    }

    // Note : at this point outAVCodec is still NULL (the encoder is only looked up
    // further below), and this allocated context is replaced by video_st->codec a few
    // lines later, so the allocation is effectively unused.
    outAVCodecContext = avcodec_alloc_context3(outAVCodec);
    if( !outAVCodecContext )
    {
        cout<<"\nerror in allocating the codec contexts";
        exit(1);
    }

    /* set property of the video file */
    outAVCodecContext = video_st->codec;
    outAVCodecContext->codec_id = AV_CODEC_ID_MPEG4;// AV_CODEC_ID_MPEG4; // AV_CODEC_ID_H264 // AV_CODEC_ID_MPEG1VIDEO
    outAVCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
    outAVCodecContext->pix_fmt  = AV_PIX_FMT_YUV420P;
    outAVCodecContext->bit_rate = 2500000; // 2500000
    outAVCodecContext->width = 1920;
    outAVCodecContext->height = 1080;
    outAVCodecContext->gop_size = 3;
    outAVCodecContext->max_b_frames = 2;
    outAVCodecContext->time_base.num = 1;
    outAVCodecContext->time_base.den = 30; // 30 fps

    {
     av_opt_set(outAVCodecContext->priv_data, "preset", "slow", 0);
    }

    outAVCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
    if( !outAVCodec )
    {
     cout<<"\nerror in finding the av codecs. try again with correct codec";
    exit(1);
    }

    /* Some container formats (like MP4) require global headers to be present
       Mark the encoder so that it behaves accordingly. */

    if ( outAVFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
    {
        outAVCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    value = avcodec_open2(outAVCodecContext, outAVCodec, NULL);
    if( value < 0)
    {
        cout<<"\nerror in opening the avcodec";
        exit(1);
    }

    /* create empty video file */
    if ( !(outAVFormatContext->flags & AVFMT_NOFILE) )
    {
     if( avio_open2(&outAVFormatContext->pb , output_file , AVIO_FLAG_WRITE ,NULL, NULL) < 0 )
     {
      cout<<"\nerror in creating the video file";
      exit(1);
     }
    }

    if(!outAVFormatContext->nb_streams)
    {
        cout<<"\noutput file dose not contain any stream";
        exit(1);
    }

    /* imp: mp4 container or some advanced container file required header information*/
    value = avformat_write_header(outAVFormatContext , &options);
    if(value < 0)
    {
        cout<<"\nerror in writing the header context";
        exit(1);
    }


    cout<<"\n\nOutput file information :\n\n";
    av_dump_format(outAVFormatContext , 0 ,output_file ,1);

    Github link https://github.com/abdullahfarwees/screen-recorder-ffmpeg-cpp
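
    This is not an answer from the original thread, just a sketch of the most likely knob, assuming the hard-coded 1920x1080 in init_outputfile() is what forces the rescale : copying the capture size from the input decoder context into the output encoder context (before avcodec_open2()) turns sws_scale() into a 1:1 conversion, so the output keeps the input resolution.

    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // Hypothetical helper, not part of the repository: call it from init_outputfile()
    // after the output codec context has been set up and before avcodec_open2().
    static void matchOutputToInput(const AVCodecContext *in, AVCodecContext *out)
    {
        out->width  = in->width;   // instead of the hard-coded 1920
        out->height = in->height;  // instead of the hard-coded 1080
    }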

  • Combining two live RTMP streams into another RTMP stream, synchronization issues (with FFMPEG)

    12 June 2020, by Evk

    I'm trying to combine (side by side) two live video streams coming over RTMP, using the following ffmpeg command :

    ffmpeg -i "rtmp://first" -i "rtmp://first" -filter_complex "[0v][1v]xstack=inputs=2:layout=0_0|1920_0[stacked]" -map "[stacked]" -preset ultrafast -vcodec libx264 -tune zerolatency -an -f flv output.flv

    In this example I actually use the same input stream twice, because the issue is more visible this way. And the issue is that in the output the two streams are out of sync by about 2-3 seconds. That is, since I have two identical inputs, I expect the left and right sides of the output to be exactly the same. Instead, the left side is behind the right side by 2-3 seconds.

    What I believe is happening is that ffmpeg connects to the inputs in order (I can see this in the output log), and connecting to each one takes 2-3 seconds (maybe it waits for an I-frame, as those streams have an I-frame interval of 3 seconds). Then, probably, it buffers the frames received from the first (already connected) input while connecting to the second one. By the time the second input is connected and frames from both inputs are ready to be put through the filter, the first input’s buffer already contains 2-3 seconds of video, and the result is out of sync.

    Again, these are just my assumptions. So, how can I achieve my goal ? What I basically want is for ffmpeg to either discard all "old" frames received before BOTH inputs are connected, OR somehow insert "empty" (black ?) frames for the second input while waiting for it to become available. I tried playing with various flags and with PTS (the setpts filter), but to no avail.