
Media (2)

Keyword: kml (tag)

Other articles (25)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • Support for all media types

    10 April 2011

    Unlike many modern software packages and document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether images (png, gif, jpg, bmp and others...), audio (MP3, Ogg, Wav and others...), video (Avi, MP4, Ogv, mpg, mov, wmv and others...), or textual content, code and more (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (4235)

  • Screen Recording with FFmpeg-Lib in C++

    27 January 2020, by Baschdel

    I’m trying to record the whole desktop stream with FFmpeg on Windows.
    I found a working example here. The problem is that some of the functions are deprecated, so I tried to replace them with the updated ones.

    But there are still some problems: the errors "has triggered a breakpoint" and "not able to read the location" occur.
    The bigger problem is that I don’t know whether this is the right way to do this.

    My code looks like this:

    // FFmpeg headers (assumed here; the original snippet does not show its includes)
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavdevice/avdevice.h>
    #include <libavformat/avformat.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/opt.h>
    #include <libswscale/swscale.h>
    }
    #include <iostream>
    #include <thread>

    using namespace std;

    /* initialize the resources */
    Recorder::Recorder()
    {

       av_register_all();
       avcodec_register_all();
       avdevice_register_all();
       cout<<"\nall required functions are registered successfully";
    }

    /* uninitialize the resources */
    Recorder::~Recorder()
    {

       avformat_close_input(&pAVFormatContext);
       if( !pAVFormatContext )
       {
           cout<<"\nfile closed successfully";
       }
       else
       {
           cout<<"\nunable to close the file";
           exit(1);
       }

       avformat_free_context(pAVFormatContext);
       if( !pAVFormatContext )
       {
           cout<<"\navformat free successfully";
       }
       else
       {
           cout<<"\nunable to free avformat context";
           exit(1);
       }

    }

    /* establishing the connection between camera or screen through its respective folder */
    int Recorder::openCamera()
    {

       value = 0;
       options = NULL;
       pAVFormatContext = NULL;

       pAVFormatContext = avformat_alloc_context();//Allocate an AVFormatContext.

       openScreen(pAVFormatContext);

       /* set frame per second */
       value = av_dict_set( &options,"framerate","30",0 );
       if(value < 0)
       {
         cout<<"\nerror in setting dictionary value";
          exit(1);
       }

       value = av_dict_set( &options, "preset", "medium", 0 );
       if(value < 0)
       {
         cout<<"\nerror in setting preset values";
         exit(1);
       }

       value = avformat_find_stream_info(pAVFormatContext, NULL); // populate stream info before the check below
       if(value < 0)
       {
         cout<<"\nunable to find the stream information";
         exit(1);
       }

       VideoStreamIndx = -1;

       /* find the first video stream index (av_find_best_stream() can do this too) */
       for(unsigned int i = 0; i < pAVFormatContext->nb_streams; i++ ) // find video stream position/index
       {
         if( pAVFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO )
         {
            VideoStreamIndx = i;
            break;
         }

       }

       if( VideoStreamIndx == -1)
       {
         cout<<"\nunable to find the video stream index. (-1)";
         exit(1);
       }

       // grab the codec context of the video stream (AVStream::codec is deprecated;
       // avcodec_parameters_to_context() is the modern replacement)
       pAVCodecContext = pAVFormatContext->streams[VideoStreamIndx]->codec;

       pAVCodec = avcodec_find_decoder(pAVCodecContext->codec_id);
       if( pAVCodec == NULL )
       {
         cout<<"\nunable to find the decoder";
         exit(1);
       }

       value = avcodec_open2(pAVCodecContext , pAVCodec , NULL);//Initialize the AVCodecContext to use the given AVCodec.
       if( value < 0 )
       {
         cout<<"\nunable to open the av codec";
         exit(1);
       }
    }

    /* initialize the video output file and its properties  */
    int Recorder::init_outputfile()
    {
       outAVFormatContext = NULL;
       value = 0;
       output_file = "output.mp4";

       avformat_alloc_output_context2(&outAVFormatContext, NULL, NULL, output_file);
       if (!outAVFormatContext)
       {
           cout<<"\nerror in allocating av format output context";
           exit(1);
       }

    /* Returns the output format in the list of registered output formats which best matches the provided parameters, or returns NULL if there is no match. */
       output_format = av_guess_format(NULL, output_file ,NULL);
       if( !output_format )
       {
        cout<<"\nerror in guessing the video format. try with correct format";
        exit(1);
       }

       video_st = avformat_new_stream(outAVFormatContext ,NULL);
       if( !video_st )
       {
           cout<<"\nerror in creating a av format new stream";
           exit(1);
       }

       outAVCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
       if (!outAVCodec)
       {
           cout << "\nerror in finding the av codecs. try again with correct codec";
           exit(1);
       }

       outAVCodecContext = avcodec_alloc_context3(outAVCodec);
       if( !outAVCodecContext )
       {
           cout<<"\nerror in allocating the codec contexts";
           exit(1);
       }

       if (codec_id == AV_CODEC_ID_H264)
       {
           // must run only after outAVCodecContext has been allocated
           av_opt_set(outAVCodecContext->priv_data, "preset", "slow", 0);
       }

       /* set properties of the video file (note: this replaces the freshly
          allocated context with the stream's deprecated AVStream::codec pointer) */
       outAVCodecContext = video_st->codec;
       outAVCodecContext->codec_id = AV_CODEC_ID_MPEG4;// AV_CODEC_ID_MPEG4; // AV_CODEC_ID_H264 // AV_CODEC_ID_MPEG1VIDEO
       outAVCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
       outAVCodecContext->pix_fmt  = AV_PIX_FMT_YUV420P;
       outAVCodecContext->bit_rate = 400000; // 2500000
       outAVCodecContext->width = 1920;
       outAVCodecContext->height = 1080;
       outAVCodecContext->gop_size = 3;
       outAVCodecContext->max_b_frames = 2;
       outAVCodecContext->time_base.num = 1;
       outAVCodecContext->time_base.den = 30; // 30 fps


       /* Some container formats (like MP4) require global headers to be present
          Mark the encoder so that it behaves accordingly. */

       if ( outAVFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
       {
           outAVCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }

       value = avcodec_open2(outAVCodecContext, outAVCodec, NULL);
       if( value < 0)
       {
           cout<<"\nerror in opening the avcodec";
           exit(1);
       }

       /* create empty video file */
       if ( !(outAVFormatContext->flags & AVFMT_NOFILE) )
       {
        if( avio_open2(&outAVFormatContext->pb , output_file , AVIO_FLAG_WRITE ,NULL, NULL) < 0 )
        {
         cout<<"\nerror in creating the video file";
         exit(1);
        }
       }

       if(!outAVFormatContext->nb_streams)
       {
           cout<<"\noutput file does not contain any stream";
           exit(1);
       }

       /* important: MP4 and some other containers require the header to be written before any packets */
       value = avformat_write_header(outAVFormatContext , &options);
       if(value < 0)
       {
           cout<<"\nerror in writing the header context";
           exit(1);
       }

       /*
       // uncomment here to view the complete video file informations
       cout<<"\n\nOutput file information :\n\n";
       av_dump_format(outAVFormatContext , 0 ,output_file ,1);
       */
    }

    int Recorder::stop() {
       threading = false;

       demux->join();
       rescale->join();
       mux->join();

       return 0;
    }

    int Recorder::start() {
       initVideoThreads();
       return 0;
    }

    int Recorder::initVideoThreads() {
       demux = new thread(&Recorder::demuxVideoStream, this, pAVCodecContext, pAVFormatContext, VideoStreamIndx);

       rescale = new thread(&Recorder::rescaleVideoStream, this, pAVCodecContext, outAVCodecContext);

       mux = new thread(&Recorder::encodeVideoStream, this, outAVCodecContext);
       return 0;
    }

    void Recorder::demuxVideoStream(AVCodecContext* codecContext, AVFormatContext* formatContext, int streamIndex)
    {
       // init packet
       AVPacket* packet = (AVPacket*)av_malloc(sizeof(AVPacket));
       av_init_packet(packet);

       int ctr = 0;

       while (threading)
       {
           if (av_read_frame(formatContext, packet) < 0) {
               exit(1);
           }

           if (packet->stream_index == streamIndex)
           {
               int return_value; // = 0;
               ctr++;

               do
               {
                   return_value = avcodec_send_packet(codecContext, packet);
               } while (return_value == AVERROR(EAGAIN) && threading);

               //int i = avcodec_send_packet(codecContext, packet);
               if (return_value < 0 && threading) { // call Decoder
                   cout << "unable to decode video";
                   exit(1);
               }
           }
       }

       avcodec_send_packet(codecContext, NULL); // flush decoder

       // return 0;
    }

    void Recorder::rescaleVideoStream(AVCodecContext* inCodecContext, AVCodecContext* outCodecContext)
    {
       bool closing = false;
       AVFrame* inFrame = av_frame_alloc();
       if (!inFrame)
       {
           cout << "\nunable to release the avframe resources";
           exit(1);
       }

       int nbytes = av_image_get_buffer_size(outAVCodecContext->pix_fmt, outAVCodecContext->width, outAVCodecContext->height, 32);
       uint8_t* video_outbuf = (uint8_t*)av_malloc(nbytes);
       if (video_outbuf == NULL)
       {
           cout << "\nunable to allocate memory";
           exit(1);
       }

       AVFrame* outFrame = av_frame_alloc();//Allocate an AVFrame and set its fields to default values.
       if (!outFrame)
       {
           cout << "\nunable to release the avframe resources for outframe";
           exit(1);
       }

       // Setup the data pointers and linesizes based on the specified image parameters and the provided array.
       int value = av_image_fill_arrays(outFrame->data, outFrame->linesize, video_outbuf, AV_PIX_FMT_YUV420P, outAVCodecContext->width, outAVCodecContext->height, 1); // returns : the size in bytes required for src
       if (value < 0)
       {
           cout << "\nerror in filling image array";
       }
       int ctr = 0;

       while (threading || !closing) {
           int value = avcodec_receive_frame(inCodecContext, inFrame);
           if (value == 0) {
               ctr++;
               SwsContext* swsCtx_ = sws_getContext(inCodecContext->width,
                   inCodecContext->height,
                   inCodecContext->pix_fmt,
                   outAVCodecContext->width,
                   outAVCodecContext->height,
                   outAVCodecContext->pix_fmt,
                   SWS_BICUBIC, NULL, NULL, NULL);
               sws_scale(swsCtx_, inFrame->data, inFrame->linesize, 0, inCodecContext->height, outFrame->data, outFrame->linesize);
               sws_freeContext(swsCtx_); // free the per-frame scaler context to avoid a leak


               int return_value;
               do
               {
                   return_value = avcodec_send_frame(outCodecContext, outFrame);
               } while (return_value == AVERROR(EAGAIN) && threading);
           }
           closing = (value == AVERROR_EOF);
       }
       avcodec_send_frame(outCodecContext, NULL);


       // av_free(video_outbuf);

       // return 0;
    }

    void Recorder::encodeVideoStream(AVCodecContext* codecContext)
    {
       bool closing = true;
       AVPacket* packet = (AVPacket*)av_malloc(sizeof(AVPacket));
       av_init_packet(packet);

       int ctr = 0;

       while (threading || !closing) {
           packet->data = NULL;    // packet data will be allocated by the encoder
           packet->size = 0;
           ctr++;
           int value = avcodec_receive_packet(codecContext, packet);
           if (value == 0) {
               if (packet->pts != AV_NOPTS_VALUE)
                   packet->pts = av_rescale_q(packet->pts, video_st->codec->time_base, video_st->time_base);
               if (packet->dts != AV_NOPTS_VALUE)
                   packet->dts = av_rescale_q(packet->dts, video_st->codec->time_base, video_st->time_base);

               //printf("Write frame %3d (size= %2d)\n", j++, packet->size / 1000);
               if (av_write_frame(outAVFormatContext, packet) != 0)
               {
                   cout << "\nerror in writing video frame";
               }
           }

           closing = (value == AVERROR_EOF);
       }

       value = av_write_trailer(outAVFormatContext);
       if (value < 0)
       {
           cout << "\nerror in writing av trailer";
           exit(1);
       }

       // av_free(packet);

       // return 0;
    }


    int Recorder::openScreen(AVFormatContext* pFormatCtx) {
       /*
       Screen-capture input device.
       On Windows this is "gdigrab", which grabs the desktop via GDI.
       On Linux the equivalent is "x11grab", which captures a region of an X11
       display and requires libxcb; see
       https://www.ffmpeg.org/ffmpeg-devices.html#x11grab
       To connect to a camera instead, pass e.g. "v4l2" to av_find_input_format.
       */
       pAVInputFormat = av_find_input_format("gdigrab");
       //value = avformat_open_input(&pAVFormatContext, ":0.0+10,250", pAVInputFormat, NULL);

       value = avformat_open_input(&pAVFormatContext, "desktop", pAVInputFormat, NULL);
       if (value != 0)
       {
           cout << "\nerror in opening input device";
           exit(1);
       }
       return 0;
    }
  • FFMPEG Audio/video out of sync after cutting and concatenating even after transcoding

    4 May 2020, by Ham789

    I am attempting to take cuts from a set of videos and concatenate them together with the concat demuxer.
    However, the audio is out of sync with the video in the output. The audio seems to drift further out of sync as the video progresses. Interestingly, if I seek to another time with the player's progress bar, the audio becomes synced up with the video but then gradually drifts out of sync again. Seeking to a new time in the player seems to reset the audio/video; it is as if they are being played back at different rates. I get this behaviour in both QuickTime and VLC.
    For each video, I decode it, trim a clip from it and then encode it to 4K resolution at 25 fps together with its audio:
    ffmpeg -ss 0.5 -t 0.5 -i input_video1.mp4 -r 25 -vf scale=3840:2160 output_video1.mp4

    I then take each of these videos and concatenate them together with the concat demuxer:
    ffmpeg -f concat -safe 0 -i cut_videos.txt -c copy -y output.mp4
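    For reference, the concat demuxer reads a plain-text list of `file` directives, one per clip. Since the post drives everything from Python, here is a minimal sketch of generating that list (the clip names are hypothetical outputs of the trim step, not taken from the post):

```python
# Write the list file consumed by `ffmpeg -f concat -safe 0 -i cut_videos.txt`.
# Each line uses the concat demuxer's "file 'name'" directive.

def write_concat_list(list_path, clips):
    with open(list_path, "w") as f:
        for clip in clips:
            f.write("file '%s'\n" % clip)

# Hypothetical clip names from the trimming step:
write_concat_list("cut_videos.txt", ["output_video1.mp4", "output_video2.mp4"])
```

    With that file in place, the `ffmpeg -f concat` command above stitches the clips in the listed order.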

    I am taking short cuts of each video (approximately 0.5 s each).
    I am using Python's subprocess to automate the cutting and concatenating of the videos.
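    The automation code is not shown in the post; as a sketch, the trim command above might be built and issued from Python like this (file names and times are placeholders):

```python
import subprocess

def build_trim_cmd(src, dst, start, duration):
    # Mirrors the trim/re-encode command shown above:
    # ffmpeg -ss <start> -t <duration> -i src -r 25 -vf scale=3840:2160 dst
    return ["ffmpeg", "-ss", str(start), "-t", str(duration), "-i", src,
            "-r", "25", "-vf", "scale=3840:2160", dst]

cmd = build_trim_cmd("input_video1.mp4", "output_video1.mp4", 0.5, 0.5)
# subprocess.run(cmd, check=True)  # uncomment to actually invoke ffmpeg
```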

    I am not sure whether this happens in the trimming or the concatenation step, but when I play back the intermediate cut files (output_video1.mp4 in the above command), there seems to be some silence before the audio comes in at the start of the video.
    When I concatenate the videos, I sometimes get a lot of these warnings; however, the audio still goes out of sync even when I do not get them:
    [mp4 @ 0000021a252ce080] Non-monotonous DTS in output stream 0:1; previous: 51792, current: 50009; changing to 51793. This may result in incorrect timestamps in the output file.
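    What the warning means: the muxer received a DTS (50009) that is not greater than the previous one (51792), so it forces it up to previous + 1 (51793), distorting the timestamps. A sketch of that fix-up logic:

```python
def fix_non_monotonic_dts(dts_values):
    # Any DTS not strictly greater than its predecessor is bumped to
    # predecessor + 1, which is exactly what the mp4 muxer warns about.
    fixed, prev = [], None
    for dts in dts_values:
        if prev is not None and dts <= prev:
            dts = prev + 1
        fixed.append(dts)
        prev = dts
    return fixed

print(fix_non_monotonic_dts([51792, 50009]))  # [51792, 51793], matching the warning
```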

    From this post, it seems to be a problem with cutting the videos and their timestamps. The solution proposed in the post is to decode, cut and then encode the video; however, I am already doing that.
    How can I ensure the audio and video are in sync? Am I transcoding incorrectly? This seems to be the only solution I can find online, yet it does not seem to work.
    UPDATE:
    I took inspiration from this post and separated the video and audio streams of output_video1.mp4 using:
    ffmpeg -i output_video1.mp4 -vcodec copy -an video.mp4

    and

    ffmpeg -i output_video1.mp4 -acodec copy -vn audio.mp4

    I then compared the durations of video.mp4 and audio.mp4 and got 0.57 s and 0.52 s respectively. Since the video is longer, this explains the period of silence in the videos. The post suggests that transcoding is the solution; however, as you can see from the commands above, that does not work for me.
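    That 0.05 s per-clip mismatch is enough to explain the drift: with stream copy, the concat demuxer lays the streams end to end, so the offset accumulates clip after clip. A back-of-the-envelope sketch (the clip count is illustrative):

```python
# Each clip contributes ~0.57 s of video but only ~0.52 s of audio,
# so after n concatenated clips the audio lags by n * (0.57 - 0.52).
def accumulated_offset(n_clips, video_len=0.57, audio_len=0.52):
    return round(n_clips * (video_len - audio_len), 2)

print(accumulated_offset(20))  # a full second of drift after 20 clips
```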

    Sample Output Log for the Trim Command

      built with Apple LLVM version 10.0.0 (clang-1000.11.45.5)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input_video1.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
  Duration: 00:00:04.06, start: 0.000000, bitrate: 14266 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 3840x2160, 14268 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
    Metadata:
      handler_name    : Core Media Video
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 94 kb/s (default)
    Metadata:
      handler_name    : Core Media Audio
File 'output_video1.mp4' already exists. Overwrite ? [y/N] y
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x7fcae4001e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7fcae4001e00] profile High, level 5.1
[libx264 @ 0x7fcae4001e00] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output_video1.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
    Stream #0:0(und): Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 3840x2160, q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
    Metadata:
      handler_name    : Core Media Video
      encoder         : Lavc58.54.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
    Metadata:
      handler_name    : Core Media Audio
      encoder         : Lavc58.54.100 aac
frame=   14 fps=7.0 q=-1.0 Lsize=     928kB time=00:00:00.51 bitrate=14884.2kbits/s dup=0 drop=1 speed=0.255x    
video:922kB audio:5kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.194501%
[libx264 @ 0x7fcae4001e00] frame I:1     Avg QP:21.06  size:228519
[libx264 @ 0x7fcae4001e00] frame P:4     Avg QP:22.03  size: 85228
[libx264 @ 0x7fcae4001e00] frame B:9     Avg QP:22.88  size: 41537
[libx264 @ 0x7fcae4001e00] consecutive B-frames: 14.3%  0.0%  0.0% 85.7%
[libx264 @ 0x7fcae4001e00] mb I  I16..4: 27.6% 64.3%  8.1%
[libx264 @ 0x7fcae4001e00] mb P  I16..4:  9.1% 10.7%  0.2%  P16..4: 48.5%  7.3%  3.9%  0.0%  0.0%    skip:20.2%
[libx264 @ 0x7fcae4001e00] mb B  I16..4:  1.1%  1.0%  0.0%  B16..8: 44.5%  2.9%  0.2%  direct: 8.3%  skip:42.0%  L0:45.6% L1:53.2% BI: 1.2%
[libx264 @ 0x7fcae4001e00] 8x8 transform intra:58.2% inter:93.4%
[libx264 @ 0x7fcae4001e00] coded y,uvDC,uvAC intra: 31.4% 62.2% 5.2% inter: 11.4% 30.9% 0.0%
[libx264 @ 0x7fcae4001e00] i16 v,h,dc,p: 15% 52% 12% 21%
[libx264 @ 0x7fcae4001e00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 33% 32%  2%  2%  2%  4%  2%  4%
[libx264 @ 0x7fcae4001e00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 39%  9%  3%  4%  4% 12%  3%  4%
[libx264 @ 0x7fcae4001e00] i8c dc,h,v,p: 43% 36% 18%  3%
[libx264 @ 0x7fcae4001e00] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x7fcae4001e00] ref P L0: 69.3%  8.0% 14.8%  7.9%
[libx264 @ 0x7fcae4001e00] ref B L0: 88.1%  9.2%  2.6%
[libx264 @ 0x7fcae4001e00] ref B L1: 90.2%  9.8%
[libx264 @ 0x7fcae4001e00] kb/s:13475.29
[aac @ 0x7fcae4012400] Qavg: 125.000

  • 5 perfect feature combinations to use with Heatmaps and Session Recordings

    28 January 2020, by Jake Thornton — Uncategorized

    Gaining valuable insights by simply creating a heatmap or setting up recordings on your most important web pages is a good start, but using the Heatmaps and Session Recordings features in combination with other Matomo features is where the real magic happens.

    If you’re serious about significantly increasing conversions on your website to impact your bottom line, you need to accurately answer these questions:

    With Matomo Analytics, you have the ability to integrate heatmaps and session recordings with all the features of a powerful web analytics platform, which means you get the complete picture of your visitor’s experience of your website.

    Here are five features that work with Heatmaps and Session Recordings to maximise conversions:

    1. Behaviour feature with Heatmaps

    Before creating heatmaps on the pages you think are most important to your website, first check out Behaviour – Pages. Here you get valuable information on unique pageviews, bounce rate, average time on page and exit rate for every page on your website.

    Use this data as your starting point for heatmaps. Here you’ll identify current pain points for your visitors before using heatmaps to analyse their interactions on these pages.

    Here’s how to use the Behaviour feature to determine which pages to set up heatmaps on:

    • Make sure you know which pages generate the most unique pageviews; it could be your blog rather than your homepage
    • Find out which pages have the highest bounce rates; can you make some quick changes above the fold and see if this makes a difference?
    • When the average time on page is high, ask why visitors are so engaged with these pages. What keeps them reading? Set up a heatmap to learn more
    • Reduce exit rates by moving visitors along to other pages on your website
    • Determine some milestones you want to achieve, e.g. use heatmaps as your visual guide to improve average time on page, bounce rates and exit rates. A milestone could be: the exit rate for your previous blog post was 34%; work towards getting this down to 30%

    2. Ecommerce feature and Custom Segments

    If you run an ecommerce business, you may want to learn only about visitors who are more likely to be your customers. For example, if you find 65% of product sales come from customers based in New York, but visits to your product pages come from every state in the USA, how can you learn more specifically about visitors only from New York?

    Using Segments to target a particular audience:

    • First, make sure you have created heatmaps and recordings on the popular product pages where you want to learn about your visitors' interactions
    • Note: make sure the segment you create generates enough pageviews for a heatmap to give accurate results. We recommend a minimum of 1,000 pageviews per sample size.
    • Then create a custom Segment: search for Ecommerce, find the Product Name and select the product. Learn how to do this here.

    Click on ‘Add a new segment’ or on the ‘edit’ link next to an existing segment name to open the segment editor:

    Click on any item on the left to see the list of information you can segment by. In this case search for "City", then select "Is" and in the third column search for "New York" (example in the image above).

    You can also use the search box at the bottom to search through the whole list.

    • This will give you insights across the Matomo platform based only on customers who purchased this product
    • Then go to the Ecommerce feature and find Sales. Here you will learn which locations are most popular for your product sales.
    • Once you know the location you want to segment, go back and update the custom Segment you just created. Click on the edit pencil icon and update it by selecting Add AND condition, then add the sub-group you would like to track on the product page. In this example, select City – New York. Click Save & Apply.

    Now you should have successfully created a segment for your popular product page with visitors only from New York.

    Check out the heatmap or recordings you created for this page. You may be very surprised to see how this segment engaged with your website compared to all website visitors.

    Note: If you run a lead-generation website, you can use the Goals feature instead of Ecommerce to track the success metrics you need.

    3. Visitor Profiles within Session Recordings

    Seeing visitor location, device, OS and browser for your recordings is very valuable, but it’s even more valuable to integrate visitor profiles with session recordings, as you get to see everything that visitor has ever done on your website!

    What pages they visited before/after the recording, what actions they took, how long they spent on your website, etc. All this is captured in the visitor profile of every individual session recording, so you can see exactly where engaged viewers are in their journey with your business, for example:

    • How has this visitor behaved on your website in the past?
    • Is this visitor already a customer?
    • Is this the visitor's first time on your website?
    • What other pages on your website are they interested in seeing in this session?

    Use the visitor profiles feature within session recordings to understand the users better when watching each session.

    You get the full picture of what role the page you recorded played in the overall experience of your website’s visitor. And more importantly, to see if they took the desired action you wanted them to take.

    4. Funnels feature (premium feature)

    The Funnels feature lets you see the customer journey from the first entry page through to the conversion page.

    Once you create a funnel, you can see the % of visitors who drop off between pages on their way to converting.

    In our example, you may see that page one to page two has a drop-off rate of 47%, page two to page three a 95% drop-off rate, and page three to page four a 97.3% drop-off rate.
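    To see how those stage drop-offs compound, the end-to-end conversion is the product of each stage's pass-through rate (1 minus the drop-off). A quick calculation with the example figures:

```python
# Drop-off rates from the example: page1->2, page2->3, page3->4.
drop_offs = [0.47, 0.95, 0.973]

conversion = 1.0
for d in drop_offs:
    conversion *= 1 - d  # pass-through rate of this stage

print("%.3f%%" % (conversion * 100))  # only ~0.072% of entrants reach page four
```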

    Why is the drop-off rate so high from page two to page three, and why is the drop-off rate so low from page three to page four?

    So, you may need to simplify things on page one because you may unknowingly be offering your visitor an easy way out of the funnel. Maybe the visitor is stuck reading your content and not understanding the value of your offering.

    Small tip for session recordings …

    With session recordings especially, you can see firsthand exactly where visitors click away from the page and exit your conversion funnel. Take note of whether this is a recurring issue with other visitors, then take action to fix this hole.

    Whatever the case, work towards reducing drop-off rates through your conversion funnels by discovering where the problems exist, make changes and learn how these changes affect engagement through heatmaps and recordings.

    5. A/B Testing feature (premium feature)

    Following on from the example in the Funnels feature: once you identify a problem in your conversion funnel, how do you know what is preventing visitors from taking an action that pushes them to the next page in the funnel? You need to test different variations of content to see what works best for your visitors.

    A/B Testing lets you test a variety of things, including:

    • different headlines 
    • less copy vs more copy 
    • different call-to-actions
    • different colour schemes
    • entirely different page layouts

    Once you’ve created two or more variations of specific landing pages in the conversion funnel, see how visitors interacted differently between the variations of landing pages through your heatmaps and recordings.

    You may see that your visitors have scrolled further down the page because more content was provided or an important CTA button was clicked more due to a colour change. Whatever the case, using A/B testing with heatmaps and session recordings is an effective combination for increasing user engagement.

    The conversion rate optimization (CRO) strategy

    CRO is the process of learning what the most valuable content/aspect of your website is and how to best optimize this for your visitors to increase conversion chances. 

    Heatmaps and session recordings play a vital role in this strategy, but it’s how you work these features in tandem with other valuable Matomo features that will give you the most actionable insights you need to grow your business.

    Want to learn how to create an effective CRO strategy?