
Media (1)

Keyword: - Tags -/copyleft

Other articles (75)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites for publishing documents of all types.
    It creates "media"; that is, a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text, and only a single document can be linked to a "media" article;

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    It is the method we use on this very platform.
    Using farm mode requires some familiarity with the inner workings of SPIP, unlike the standalone version, which does not really require any specific knowledge since the usual SPIP private area is no longer used.
    To begin with, you must have installed the same files as for the installation (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    • implementation costs to be shared between several different projects / individuals
    • rapid deployment of multiple unique sites
    • creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (4497)

  • Capture from multiple streams concurrently, best way to do it and how to reduce CPU usage

    19 June 2019, by DRONE_6969

    I am currently writing an application that captures a lot of RTSP streams (in my case 12) and displays them in Qt widgets. The problem arises when I go beyond around 6-7 streams: CPU usage spikes and there is visible stutter.

    The reason I don't think it is the Qt draw function is that I measured how long it takes to draw an incoming camera image (and some sample images I had), and it is always well under 33 milliseconds, even with 12 widgets being updated.

    I also ran the OpenCV capture method without drawing anything and got pretty much the same CPU consumption as when drawing the frames (it dropped by roughly 10% at most, and GPU usage went to zero).

    IMPORTANT: I am using RTSP streams, which are H.264 streams.

    IF IT MATTERS, MY SPECS:

    Intel Core i7-6700 @ 3.40 GHz (8 CPUs)
    Memory: 16 GB
    GPU: Intel HD Graphics 530

    (I also ran my code on a computer with a dedicated graphics card; it did eliminate some stutter, but CPU usage is still pretty high.)

    I am currently using OpenCV 4.1.0 built with GStreamer enabled. I also have the opencv-world build; there is no difference in performance between the two.
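    Since the build has GStreamer enabled, one thing that can be tried is opening each stream through an explicit GStreamer pipeline instead of the default FFmpeg path, so the depay/parse/decode stages are spelled out. A rough sketch, assuming a plain H.264 RTSP source (the latency value and the avdec_h264 decoder element are illustrative choices; avdec_h264 could be swapped for a hardware decoder element if one is available):

    #include <opencv2/opencv.hpp>
    #include <string>

    // Sketch: open an RTSP stream through an explicit GStreamer pipeline
    // instead of the default FFmpeg backend.
    cv::VideoCapture openRtspViaGStreamer(const std::string& rtspUrl) {
        std::string pipeline =
            "rtspsrc location=" + rtspUrl + " latency=100 ! "
            "rtph264depay ! h264parse ! avdec_h264 ! "
            "videoconvert ! appsink";
        return cv::VideoCapture(pipeline, cv::CAP_GSTREAMER);
    }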

    I have created a class called Camera that holds its frame size constraints and various control functions, as well as the stream function. The stream function runs on a separate thread; whenever stream() is done with the current frame, it sends the ready Mat via an onNewFrame event I created, which converts it to a QPixmap and updates the widget's lastImage variable. This way I can update the image in a more thread-safe way.
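    (FrameListener.h and dPacket.h are not pasted below; a minimal sketch of what they look like, inferred from how the rest of the code uses them, would be roughly:)

    #include <string>
    #include <opencv2/core.hpp>

    // Rough shape of the packet handed from Camera to its listeners (inferred).
    struct dPacket {
        std::string id;   // id of the camera that produced the frame
        cv::Mat* frame;   // pointer to the freshly captured frame
    };

    // Rough shape of the listener interface the widgets implement (inferred).
    class FrameListener {
    public:
        virtual ~FrameListener() = default;
        virtual void onPNewFrame(dPacket* pack) = 0;
        virtual void onNotification(int alertCode) = 0;
    };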

    I have tried to manipulate the VideoCapture.set() values, but it didn't really help.
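    For example, along these lines (the exact properties and values are only illustrative, and whether the FFmpeg backend honours them is another matter):

    // Illustrative VideoCapture tweaks tried on each camera before streaming.
    ipCam.set(cv::CAP_PROP_BUFFERSIZE, 2);      // keep the internal frame queue short
    ipCam.set(cv::CAP_PROP_FRAME_WIDTH, 480);   // ask the backend for a smaller size
    ipCam.set(cv::CAP_PROP_FRAME_HEIGHT, 240);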

    This is my stream function (ignore the bool return; it doesn't do anything, it is a remnant from a few minutes ago when I was trying to use std::async):

    bool Camera::stream() {
       /* This function is meant to run on a separate thread and fill up the buffer independently of
       main stream thread */
       //cv::setNumThreads(100);
       /* Rules for these slightly changed! */
       Mat pre;  // Grab initial undoctored frame
       //pre = Mat::zeros(size, CV_8UC1);
       Mat frame; // Final modified frame
       frame = Mat::zeros(size, CV_8UC1);
       if (!pre.isContinuous()) pre = pre.clone();

       ipCam.open(streamUrl, CAP_FFMPEG);


       while (ipCam.isOpened() && capture) {
           // If camera is opened we'll need to capture and process the frame
           try {
               auto start = std::chrono::system_clock::now();

               ipCam >> pre;

               if (pre.empty()) {
                   /* Check for blank frame, return error if there is a blank frame*/
                   cerr << id << ": ERROR! blank frame grabbed\n";
                   for (FrameListener* i : clients) {
                       i->onNotification(1); // Notify clients about this shit
                   }
                   break;
               }

               else {
                   // Only continue if frame not empty

                   if (pre.cols != size.width && pre.rows != size.height) {
                       resize(pre, frame, size);
                       pre.release();
                   }
                   else {
                       frame = pre;
                   }

                   dPacket* pack = new dPacket{id,&frame};
                   for (auto i : clients) {
                       i->onPNewFrame(pack);
                   }
                   frame.release();
                   delete pack;
               }
           }

           catch (int e) {
               cout << endl << "-----Exception during capture process! CODE " << e << endl;
           }
           // End camera manipulations
       }

       cout << "Camera timed out, or connection is closed..." << endl;
       if (tryResetConnection) {
           cout << "Reconnection flag is set, retrying after 3 seconds..." << endl;
           for (FrameListener* i : clients) {
               i->onNotification(-1); // Notify clients about this shit
           }
           this_thread::sleep_for(chrono::milliseconds(3000));
           stream();
       }

       return true;
    }

    This is my onPNewFrame function. The conversion is still done on the camera's thread, because it is called from within stream() and is therefore in that scope (I also checked):

    void GLWidget::onPNewFrame(dPacket* inPack) {
       lastFlag = 0;

       if (bufferEnabled) {
           buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
       }
       else {
           if (playing) {
               /* Only process if this widget is playing */
               frameProcessing = true;
               lastImage.convertFromImage(toQImageFromPMat(inPack->frame));
               frameProcessing = false;
           }
       }

       if (lastFlag != -1 && !lastImage.isNull()) {
           connecting = false;
       }
       else {
           connecting = true;
       }
    }
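    Note that onPNewFrame runs on the camera's thread while paintEvent runs on the GUI thread, so lastImage and the buffer are touched from two threads. An alternative would be to hand the converted image to the GUI thread with a queued call instead of writing lastImage directly; a sketch only (setLastImage is a hypothetical slot that would do the convertFromImage call, it does not exist in the code pasted here):

    // Sketch: deliver the frame to the GUI thread via a queued invocation.
    // "setLastImage" is a hypothetical slot, not part of the pasted code.
    void GLWidget::onPNewFrame(dPacket* inPack) {
        QImage img = toQImageFromPMat(inPack->frame); // rgbSwapped() already returns a copy
        QMetaObject::invokeMethod(this, "setLastImage",
                                  Qt::QueuedConnection,
                                  Q_ARG(QImage, img));
    }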

    This is my Mat to QImage conversion:

    QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {
       return QImage(mat->data, mat->cols, mat->rows, QImage::Format_RGB888).rgbSwapped();
    }
    NOTE: not converting does not result in a CPU boost (at least not a significant one).
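    One detail about this constructor: it assumes tightly packed rows. If the Mat can carry row padding, passing the step explicitly is the safer variant (shown as an alternative, not what the code above does):

    // Variant that passes the Mat's row stride (step) to QImage explicitly.
    QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {
        return QImage(mat->data, mat->cols, mat->rows,
                      static_cast<int>(mat->step),
                      QImage::Format_RGB888).rgbSwapped();
    }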

    Minimal verifiable example

    This program is large. I am going to paste GLWidget.cpp and GLWidget.h as well as Camera.h and Camera.cpp. You can put GLWidget into anything, as long as you spawn more than 6 of them. Camera relies on CamUtils, but it is possible to just paste a URL into the VideoCapture instead, as shown below.
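    For instance, the CamUtils lookup can be replaced with a hard-coded RTSP URL when opening the capture (the address here is just a placeholder):

    // Stand-in for camUtils->getCameraStreamURL(camId); the address is a placeholder.
    ipCam.open("rtsp://user:pass@192.0.2.10:554/axis-media/media.amp", CAP_FFMPEG);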

    I also supplied CamUtils, just in case

    Camera.h:

    #pragma once
    #include <iostream>
    #include <vector>
    #include <fstream>
    #include <map>
    #include <string>
    #include <sstream>
    #include <algorithm>
    #include "FrameListener.h"
    #include <opencv2/opencv.hpp> // VideoCapture, Mat, Size
    #include <thread>
    #include "CamUtils.h"
    #include <ctime>
    #include "dPacket.h"

    using namespace std;
    using namespace cv;

    class Camera
    {

       /*
           CLEANED UP!
           Camera now is only responsible for streaming and echoing captured frames.
           Frames are now wrapped into dPacket struct.
       */


    private:
       string id;
       vector<FrameListener*> clients;
       VideoCapture ipCam;
       string streamUrl;
       Size size;
       bool tryResetConnection = false;

       //TODO: Remove these as they are not going to be used going on:
       bool isPlaying = true;
       bool capture = true;

       //SECRET FEATURES:
       bool detect = false;


    public:
       Camera(string url, int width = 480, int height = 240, bool detect_=false);
       bool stream();
       void setReconnectable(bool newReconStatus);
       void addListener(FrameListener* client);
       vector<bool> getState();    // Returns current state: vector[0] playing state; vector[1] capture-opened state; TODO: Remove this as it should no longer control behaviour
       void killStream();
       bool getReconnectable();
    };


    Camera.cpp:

    #include "Camera.h"


    Camera::Camera(string url, int width, int height, bool detect_) // Default 240p
    {
       streamUrl = url; // Prepare url
       size = Size(width, height);
       detect = detect_;

    }

    void Camera::addListener(FrameListener* client) {
       clients.push_back(client);
    }


    /*
                   TEST CAMERAS(Paste into cameras.dViewer):
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7122","ip":"176.57.73.231","password":"null","username":"null"},
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7122","ip":"176.57.73.231","password":"null","username":"null"},
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7144","ip":"172.20.101.13","password":"admin","username":"root"}
                   {"id":"96a73796-c129-46fc-9c01-40acd8ed7144","ip":"172.20.101.13","password":"admin","username":"root"}

    */



    bool Camera::stream() {
       /* This function is meant to run on a separate thread and fill up the buffer independently of
       main stream thread */
       //cv::setNumThreads(100);
       /* Rules for these slightly changed! */
       Mat pre;  // Grab initial undoctored frame
       //pre = Mat::zeros(size, CV_8UC1);
       Mat frame; // Final modified frame
       frame = Mat::zeros(size, CV_8UC1);
       if (!pre.isContinuous()) pre = pre.clone();

       ipCam.open(streamUrl, CAP_FFMPEG);

       while (ipCam.isOpened() && capture) {
           // If camera is opened we'll need to capture and process the frame
           try {
               auto start = std::chrono::system_clock::now();

               ipCam >> pre;

               if (pre.empty()) {
                   /* Check for blank frame, return error if there is a blank frame*/
                   cerr << id << ": ERROR! blank frame grabbed\n";
                   for (FrameListener* i : clients) {
                       i->onNotification(1); // Notify clients about this shit
                   }
                   break;
               }

               else {
                   // Only continue if frame not empty

                   if (pre.cols != size.width && pre.rows != size.height) {
                       resize(pre, frame, size);
                       pre.release();
                   }
                   else {
                       frame = pre;
                   }

                   auto end = std::chrono::system_clock::now();
                   std::time_t ts = std::chrono::system_clock::to_time_t(end);
                   dPacket* pack = new dPacket{ id, &frame };
                   for (auto i : clients) {
                       i->onPNewFrame(pack);
                   }
                   frame.release();
                   delete pack;
               }
           }

           catch (int e) {
               cout << endl << "-----Exception during capture process! CODE " << e << endl;
           }
           // End camera manipulations
       }

       cout << "Camera timed out, or connection is closed..." << endl;
       if (tryResetConnection) {
           cout << "Reconnection flag is set, retrying after 3 seconds..." << endl;
           for (FrameListener* i : clients) {
               i->onNotification(-1); // Notify clients about this shit
           }
           this_thread::sleep_for(chrono::milliseconds(3000));
           stream();
       }

       return true;
    }


    void Camera::killStream(){
       tryResetConnection = false;
       capture = false;
       ipCam.release();
    }

    void Camera::setReconnectable(bool reconFlag) {
       tryResetConnection = reconFlag;
    }

    bool Camera::getReconnectable() {
       return tryResetConnection;
    }

    vector<bool> Camera::getState() {
       vector<bool> states;
       states.push_back(isPlaying);
       states.push_back(ipCam.isOpened());
       return states;
    }




    GLWidget.h:

    #ifndef GLWIDGET_H
    #define GLWIDGET_H

    #include <QOpenGLWidget>
    #include <QMouseEvent>
    #include "FrameListener.h"
    #include "Camera.h"
    #include <opencv2/opencv.hpp> // cv::Mat used in toQImageFromPMat
    #include "CamUtils.h"
    #include <queue>              // std::queue<QPixmap> buffer
    #include "dPacket.h"
    #include <chrono>
    #include <ctime>
    #include <thread>             // std::thread* camThread
    #include "FullScreenVideo.h"
    #include <QMovie>
    #include "helper.h"
    #include <iostream>
    #include <QPainter>
    #include <QTimer>

    class Helper;

    class GLWidget : public QOpenGLWidget, public FrameListener
    {
       Q_OBJECT

    public:
       GLWidget(std::string camId, CamUtils *cUtils, int width, int height, bool denyFullScreen_ = false, bool detectFlag_=false, QWidget* parent = nullptr);
       void killStream();
       ~GLWidget();

    public slots:
       void animate();
       void setBufferEnabled(bool setState);
       void setCameraRetryConnection(bool setState);
       void GLUpdate();            // Call to update the widget
       void onRightClickMenu(const QPoint& point);

    protected:
       void paintEvent(QPaintEvent* event) override;
       void onPNewFrame(dPacket* frame);
       void onNotification(int alert_code);


    private:
       // Objects and resourses
       Helper* helper;
       Camera* cam;
       CamUtils* camUtils;
       QTimer* timer; // Keep track of update
       QPixmap lastImage;
       QMovie* connMov;
       QMovie* test;

       QPixmap logo;

       // Control fields
       int width;
       int height;
       int camUtilsAddr;
       int elapsed;
       std::thread* camThread;
       std::string camId;
       bool denyFullScreen = false;
       bool playing = true;
       bool streaming = true;
       bool debug = false;
       bool connecting = true;
       int lastFlag = 0;


       // Debug fields
       std::chrono::high_resolution_clock::time_point lastFrameAt;
       std::chrono::high_resolution_clock::time_point now;
       std::chrono::duration<double> painTime; // time took to draw last frame

       //Buffer stuff
       std::queue<QPixmap> buffer;
       bool bufferEnabled = false;
       bool initialBuffer = false;
       bool buffering = true;
       bool frameProcessing = false;



       //Functions
       QImage toQImageFromPMat(cv::Mat* inFrame);
       void mousePressEvent(QMouseEvent* event) override;
       void drawImageGLLatest(QPainter* painter, QPaintEvent* event, int elapsed);
       void drawOnPaused(QPainter* painter, QPaintEvent* event, int elapsed);
       void drawOnStatus(int statusFlag, QPainter* painter, QPaintEvent* event, int elapsed);
    };

    #endif


    GLWidget.cpp:

    #include "glwidget.h"
    #include <future>


    FullScreenVideo* fullScreen;

    GLWidget::GLWidget(std::string camId_, CamUtils* cUtils, int width_, int height_,  bool denyFullScreen_, bool detectFlag_, QWidget* parent)
       : QOpenGLWidget(parent), helper(nullptr) // helper is created in the constructor body
    {
       cout << "Player for CAMERA " << camId_ << endl;

       /* Underlying properties */
       camUtils = cUtils;
       cout << "GLWidget Incoming CamUtils addr " << camUtils << endl;
       cout << "GLWidget Set CamUtils addr " << camUtils << endl;
       camId = camId_;
       elapsed = 0;
       width = width_ + 5;
       height = height_ + 5;
       helper = new Helper();
       setFixedSize(width, height);
       denyFullScreen = denyFullScreen_;

       /* Camera capture thread */
       cam = new Camera(camUtils->getCameraStreamURL(camId), width_, height_, detectFlag_);
       cam->addListener(this);

       /* Sync states */
       vector<bool> initState = cam->getState();
       playing = initState[0];
       streaming = initState[1];
       cout << "Initial states: " << playing << " " << streaming << endl;
       camThread = new std::thread(&Camera::stream, cam);
       cout << "================================================" << endl;

       // Right click set up
       setContextMenuPolicy(Qt::CustomContextMenu);


       /* Loading gif */
       connMov = new QMovie("establishingConnection.gif");
       connMov->start();
       QString url = R"(RLC-logo.png)";
       logo = QPixmap(url);
       QTimer* timer = new QTimer(this);
       connect(timer, SIGNAL(timeout()), this, SLOT(GLUpdate()));
       timer->start(1000/30);
       playing = true;

    }

    /* SYSTEM */
    void GLWidget::animate()
    {
       elapsed = (elapsed + qobject_cast<QTimer*>(sender())->interval()) % 1000;
       std::cout << elapsed << "\n";
    }


    void GLWidget::GLUpdate() {
       /* Process decisions before update call */
       if (bufferEnabled) {
           /* Process buffer before update */
           now = chrono::high_resolution_clock::now();
           std::chrono::duration<double, std::milli> timeSinceLastUpdate = now - lastFrameAt;
           if (timeSinceLastUpdate.count() > 25) {
               if (buffer.size() > 1 && playing) {
                   lastImage.swap(buffer.front());
                   buffer.pop();
                   lastFrameAt = chrono::high_resolution_clock::now();
               }
           }
           //update(); // Update
       }
       else {
           /* No buffer */
       }
       repaint();
    }


    /* EVENTS */
    void GLWidget::onRightClickMenu(const QPoint& point) {
       cout << "Right click request got" << endl;

       QPoint globPos = this->mapToGlobal(point);
       QMenu myMenu;

       if (!denyFullScreen) {
           myMenu.addAction("Open Full Screen");
       }
       myMenu.addAction("Toggle Debug Info");


       QAction* selected = myMenu.exec(globPos);

       if (selected) {
           string optiontxt = selected->text().toStdString();

           if (optiontxt == "Open Full Screen") {
               cout << "Chose to open full screen of " << camId << endl;
               fullScreen = new FullScreenVideo(bufferEnabled, this);
               fullScreen->setUpView(camUtils, camId);
               fullScreen->show();
               playing = false;
           }

           if (optiontxt == "Toggle Debug Info") {
               cout << "Chose to toggle debug of " << camId << endl;
               debug = !debug;
           }
       }
       else {
           cout << "Chose nothing!" << endl;
       }


    }



    void GLWidget::onPNewFrame(dPacket* inPack) {
       lastFlag = 0;

       if (bufferEnabled) {
           buffer.push(QPixmap::fromImage(toQImageFromPMat(inPack->frame)));
       }
       else {
           if (playing) {
               /* Only process if this widget is playing */
               frameProcessing = true;
               lastImage.convertFromImage(toQImageFromPMat(inPack->frame));
               frameProcessing = false;
           }
       }

       if (lastFlag != -1 && !lastImage.isNull()) {
           connecting = false;
       }
       else {
           connecting = true;
       }
    }


    void GLWidget::onNotification(int alert) {
       lastFlag = alert;  
    }


    /* Paint events*/


    void GLWidget::paintEvent(QPaintEvent* event)
    {
       QPainter painter(this);

           if (lastFlag != 0 || connecting) {
            drawOnStatus(lastFlag, &painter, event, elapsed);
           }
           else {

               /* Actual frame drawing */
               if (playing) {
                   if (!frameProcessing) {
                    drawImageGLLatest(&painter, event, elapsed);
                   }
               }
               else {
                drawOnPaused(&painter, event, elapsed);
               }
           }
       painter.end();

    }


    /* DRAWING STUFF */

    void GLWidget::drawOnStatus(int statusFlag, QPainter* bgPaint, QPaintEvent* event, int elapsed) {

       QString str;
       QFont font("times", 15);
       bgPaint->eraseRect(QRect(0, 0, width, height));
       if (!lastImage.isNull()) {
           bgPaint->drawPixmap(QRect(0, 0, width, height), lastImage);
       }
       /* Test background painting */
       if (connecting) {
           string k = "Connecting to " + camUtils->getIp(camId);
           str.append(k.c_str());
       }
       else {
           switch (statusFlag) {
           case 1:
               str = "Blank frame received...";
               break;

           case -1:
               if (cam->getReconnectable()) {
                   str = "Connection lost, will try to reconnect.";
                   bgPaint->setOpacity(0.3);
               }
               else {
                   str = "Connection lost...";
                   bgPaint->setOpacity(0.3);
               }

               break;
           }
       }

       bgPaint->drawPixmap(QRect(0, 0, width, height), QPixmap::fromImage(connMov->currentImage()));
       bgPaint->setPen(Qt::red);
       bgPaint->setFont(font);
       QFontMetrics fm(font);
       const QRect kek(0, 0, fm.width(str), fm.height());
       QRect bound;
       bgPaint->setOpacity(1);
       bgPaint->drawText(bgPaint->viewport().width()/2 - kek.width()/2, bgPaint->viewport().height()/2 - kek.height(), str);

       bgPaint->drawPixmap(bgPaint->viewport().width() / 2 - logo.width()/2, height - logo.width() - 15, logo);

    }



    void GLWidget::drawOnPaused(QPainter* painter, QPaintEvent* event, int elapsed) {
       painter->eraseRect(0, 0, width, height);
       QFont font = painter->font();
       font.setPointSize(18);
       painter->setPen(Qt::red);
       QFontMetrics fm(font);
       QString str("Paused");
       painter->drawPixmap(QRect(0, 0, width, height),lastImage);
       painter->drawText(QPoint(painter->viewport().width() - fm.width(str), 50), str);

       if (debug) {
           QFont font = painter->font();
           font.setPointSize(25);
           painter->setPen(Qt::red);
           string camMess = "CAMID: " + camId;
           QString mess(camMess.c_str());
           string camIp = "IP: " + camUtils->getIp(camId);
           QString ipMess(camIp.c_str());
           QString bufferSize("Buffer size: " + QString::number(buffer.size()));
           QString lastFrameText("Last frame draw time: " + QString::number(painTime.count()) + "s");
           painter->drawText(QPoint(10, 50), mess);
           painter->drawText(QPoint(10, 60), ipMess);
           QString bufferState;
           if (bufferEnabled) {
               bufferState = QString("Experimental BUFFER is enabled!");
               QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
               painter->drawText(QPoint(10, 80), currentBufferSize);
           }
           else {
               bufferState = QString("Experimental BUFFER is disabled!");
           }
           painter->drawText(QPoint(10, 70), bufferState);
           painter->drawText(QPoint(10, height - 25), lastFrameText);
       }
    }


    void GLWidget::drawImageGLLatest(QPainter* painter, QPaintEvent* event, int elapsed) {
       auto start = chrono::high_resolution_clock::now();
       painter->drawPixmap(QRect(0, 0, width, height), lastImage);
       if (debug) {
           QFont font = painter->font();
           font.setPointSize(25);
           painter->setPen(Qt::red);
           string camMess = "CAMID: " + camId;
           QString mess(camMess.c_str());
           string camIp = "IP: " + camUtils->getIp(camId);
           QString ipMess(camIp.c_str());
           QString bufferSize("Buffer size: " + QString::number(buffer.size()));
           QString lastFrameText("Last frame draw time: " + QString::number(painTime.count()) + "s");
           painter->drawText(QPoint(10, 50), mess);
           painter->drawText(QPoint(10, 60), ipMess);
           QString bufferState;
           if(bufferEnabled){
               bufferState = QString("Experimental BUFFER is enabled!");
               QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
               painter->drawText(QPoint(10,80), currentBufferSize);
           }
           else {
               bufferState = QString("Experimental BUFFER is disabled!");
               QString currentBufferSize("Current buffer load: " + QString::number(buffer.size()));
               painter->drawText(QPoint(10, 80), currentBufferSize);
           }
           painter->drawText(QPoint(10, 70), bufferState);
           painter->drawText(QPoint(10, height - 25), lastFrameText);

       }
       auto end = chrono::high_resolution_clock::now();
       painTime = end - start;
    }



    /* END DRAWING STUFF */



    /* UI EVENTS */

    void GLWidget::mousePressEvent(QMouseEvent* e) {

       if (e->button() == Qt::LeftButton) {
           if (fullScreen == nullptr || !fullScreen->isVisible()) { // Do not unpause if window is opened
               playing = !playing;
           }
       }

       if (e->button() == Qt::RightButton) {
           onRightClickMenu(e->pos());
       }
    }



    /* Utilities */
    QImage GLWidget::toQImageFromPMat(cv::Mat* mat) {



       return QImage(mat->data, mat->cols, mat->rows, QImage::Format_RGB888).rgbSwapped();



    }

    /* State control */

    void GLWidget::killStream() {
       cam->killStream();
       camThread->join();
    }

    void GLWidget::setBufferEnabled(bool newBufferState) {
       cout << "Player: " << camId << ", buffer state updated: " << newBufferState << endl;
       bufferEnabled = newBufferState;
       buffer = {}; // clear any queued frames (std::queue::empty() only checks emptiness)
    }

    void GLWidget::setCameraRetryConnection(bool newState) {
       cam->setReconnectable(newState);
    }

    /* Destruction */
    GLWidget::~GLWidget() {
       cam->killStream();
       camThread->join();
    }

    CamUtils.h:

    #pragma once
    #include <iostream>
    #include <vector>
    #include <fstream>
    #include <map>
    #include <string>
    #include <sstream>
    #include <algorithm>
    #include <nlohmann/json.hpp>

    using namespace std;
    using json = nlohmann::json;

    class CamUtils
    {
    private:

       string camDb = "cameras.dViewer";
       map<string, vector<string>> cameraList; // Legacy
       json cameras;
       ofstream dbFile;
       bool dbExists(); // Always hard coded

       /* Old IMPLEMENTATION */
       void writeLineToDb_(const string& content, bool append = false);
       void loadCameras_();

       /* JSON based */
       void loadCameras();

    public:
       CamUtils();
       string generateRandomString(size_t length);
       string getCameraStreamURL(string cameraId) const;
       string saveCamera(string ip, string username, string pass); // Return generated id
       vector<string> listAllCameraIds();
       string getIp(string cameraId);
    };



    CamUtils.cpp:

    #include "CamUtils.h"
    #pragma comment(lib, "rpcrt4.lib")  // UuidCreate - Minimum supported OS Win 2000
    #include <rpc.h>     // UuidCreate / UuidToStringA (see rpcrt4.lib pragma above)
    #include <cstring>   // strlen
    #include <iostream>

    CamUtils::CamUtils()
    {
       if (!dbExists()) {
           ofstream dbFile;
           dbFile.open(camDb);
           cameras["cameras"] = json::array();
           dbFile << cameras << std::endl;
           dbFile.close();

       }
       else {
           loadCameras();
       }
    }




    vector<string> CamUtils::listAllCameraIds() {
       vector<string> ids;
       cout << "IN LIST " << endl;
       for (auto& cam : cameras["cameras"]) {
           ids.push_back(cam["id"].get<string>());
           //cout << cam["id"].get<string>() << std::endl;
       }
       return ids;
    }

    string CamUtils::getIp(string id) {
       vector<string> camDetails = cameraList[id];
       string ip = "NO IP WILL DISPLAYED UNTIL I FIGURE OUT A BUG";
       for (auto& cam : cameras["cameras"]) {
           if (id == cam["id"]) {
               ip = cam["ip"].get<string>();
           }
       }

       return ip;
    }

    string CamUtils::getCameraStreamURL(string id) const {
       string url = "err"; // err is the default, it will be overwritten in case id is found, dont forget to check for it

       for (auto& cam : cameras["cameras"]) {
           if (id == cam["id"]) {
               if (cam["username"].get<string>() == "null") {
                   url = "rtsp://" + cam["ip"].get<string>() + ":554/axis-media/media.amp?tcp";
               }
               else {
                   url = "rtsp://" + cam["username"].get<string>() + ":" + cam["password"].get<string>() + "@" + cam["ip"].get<string>() + ":554/axis-media/media.amp?streamprofile=720_30";
               }
           }
       }

       return url;  // Dont forget to check for err when using this shit
    }


    string CamUtils::saveCamera(string ip, string username, string password) {
       UUID uid;
       UuidCreate(&uid);
       char* str;
       UuidToStringA(&uid, (RPC_CSTR*)&str);
       string id = str;
       cout << "GEN: " << id << endl;
       json cam = json({}); // Create empty object
       cam["id"] = id;
       cam["ip"] = ip;
       cam["username"] = username;
       cam["password"] = password;
       cameras["cameras"].push_back(cam);
       std::ofstream out(camDb);
       out << cameras << std::endl;
       cout << cameras["cameras"] << endl;

       cout << "Saved camera as " << id << endl;
       return id;
    }


    bool CamUtils::dbExists() {
       ifstream dbFile(camDb);
       return (bool)dbFile;
    }





    void CamUtils::loadCameras() {
       cout << "Load call" << endl;
       ifstream dbFile(camDb);
       string line;
       string wholeFile;

       while (std::getline(dbFile, line)) {
           cout << line << endl;
           wholeFile += line;
       }
       try {
           cameras = json::parse(wholeFile);
           //cout << cameras["cameras"] << endl;

       }
       catch (const exception& e) {
           cout << e.what() << endl;
       }
       dbFile.close();
    }










    /*
       LEGACY CODE, TO BE REMOVED!

    */



    void CamUtils::loadCameras_() {
       /*
           LEGACY CODE:
           This used to be the way to load cameras, but I moved on to JSON based configuration so this is no longer needed and will be removed soon
       */

       ifstream dbFile(camDb);
       string line;
       while (std::getline(dbFile, line)) {
           /*
               This function load camera data to the map:
               The order MUST be the following: 0:ID, 1:IP, 2:USERNAME, 3:PASSWORD.
               Always delimited with | no spaces between!
           */
           if (!line.empty()) {
               stringstream ss(line);
               string item;
               vector<string> splitString;

               while (std::getline(ss, item, '|')) {
                   splitString.push_back(item);
               }
               if (splitString.size() > 0) {
                   /* Dont even parse if the program didnt split right*/
                   //cout << "Split string: " << splitString.size() << "\n";
                   for (int i = 0; i < (splitString.size()); i++) cameraList[splitString[0]].push_back(splitString[i]);
               }
           }
       }
    }



    void CamUtils::writeLineToDb_(const string& content, bool append) {
       ofstream dbFile;
       cout << "Creating?";
       if (append) {
           dbFile.open(camDb, ios_base::app);
       }
       else {
           dbFile.open(camDb);
       }

       dbFile << content.c_str() << "\r\n";
       dbFile.flush();
    }

    /* JSON Reworx */




    string CamUtils::generateRandomString(size_t length)
    {
       const char* charmap = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
       const size_t charmapLength = strlen(charmap);
       auto generator = [&]() { return charmap[rand() % charmapLength]; };
       string result;
       result.reserve(length);
       generate_n(back_inserter(result), length, generator);
       return result;
    }

    End of example

    How would I go about decreasing CPU usage when dealing with a large number of streams?

  • How to fix: Frame rate differs from source after encoding certain mp4 videos to wmv

    25 June 2019, by hpenhp

    I’m having trouble with a usually working ffmpeg script for encoding mp4 and more to wmv. The goal is to keep it like the original, except for the resolution (smaller) and the format (wmv).

    While it usually works for a lot of videos, on some of them it doesn't, and I would like to understand and avoid that.

    I tried several things: forcing the frame rate on the output, using the -vsync 2 option, and -vsync 0 as well. None of that fixed the issue.
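    One of the variants tried looked roughly like this (shown only to illustrate those attempts, it did not fix the problem):

    ffmpeg -i input.mp4 -r 24 -vsync 2 -s 480x320 -b:v 1000k -vcodec msmpeg4 -acodec wmav2 -f asf output.wmv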

    Another thing worth adding is that the fps value from MediaInfo is correct (24 fps) on the output as well as the input, but Windows Explorer reports 30 frames per second, and when you play the video you can see that text which matched the input no longer matches the output (because of the fps mismatch).

    Below is the script I use:

    ffmpeg -threads auto -i input.mp4 -s 480*320 -b:v 1000k -vcodec msmpeg4 -acodec wmav2 -f asf output.wmv

    Here is the ffmpeg log:

    ffmpeg version 3.4.1-1~16.04.york0 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.5) 20160609
     configuration: --prefix=/usr --extra-version='1~16.04.york0' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
     WARNING: library configuration mismatch
     avcodec     configuration: --prefix=/usr --extra-version='1~16.04.york0' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared --enable-version3 --disable-doc --disable-programs --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libtesseract --enable-libvo_amrwbenc
     libavutil      55. 78.100 / 55. 78.100
     libavcodec     57.107.100 / 57.107.100
     libavformat    57. 83.100 / 57. 83.100
     libavdevice    57. 10.100 / 57. 10.100
     libavfilter     6.107.100 /  6.107.100
     libavresample   3.  7.  0 /  3.  7.  0
     libswscale      4.  8.100 /  4.  8.100
     libswresample   2.  9.100 /  2.  9.100
     libpostproc    54.  7.100 / 54.  7.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './test_2k-24__hq_960x540.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 0
       compatible_brands: mp42mp41isomavc1
       creation_time   : 2019-05-27T14:37:50.000000Z
     Duration: 01:28:52.80, start: 0.000000, bitrate: 1232 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m), 960x540, 1091 kb/s, 24 fps, 24 tbr, 24 tbn, 48 tbc (default)
       Metadata:
         creation_time   : 2019-05-27T14:37:50.000000Z
         handler_name    : L-SMASH Video Handler
         encoder         : AVC Coding
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 137 kb/s (default)
       Metadata:
         creation_time   : 2019-05-27T14:37:50.000000Z
         handler_name    : L-SMASH Audio Handler
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> msmpeg4v3 (msmpeg4))
     Stream #0:1 -> #0:1 (aac (native) -> wmav2 (native))
    Press [q] to stop, [?] for help
    Output #0, asf, to '/nas/copiedetravail/test_2k-24__hq_960x540.mp4.wmv':
     Metadata:
       major_brand     : mp42
       minor_version   : 0
       compatible_brands: mp42mp41isomavc1
       WM/EncodingSettings: Lavf57.83.100
       Stream #0:0(und): Video: msmpeg4v3 (msmpeg4) (MP43 / 0x3334504D), yuv420p(progressive), 480x320, q=2-31, 1000 kb/s, 24 fps, 1k tbn, 24 tbc (default)
       Metadata:
         creation_time   : 2019-05-27T14:37:50.000000Z
         handler_name    : L-SMASH Video Handler
         encoder         : Lavc57.107.100 msmpeg4
       Side data:
         cpb: bitrate max/min/avg: 0/0/1000000 buffer size: 0 vbv_delay: -1
       Stream #0:1(und): Audio: wmav2 (a[1][0][0] / 0x0161), 48000 Hz, stereo, fltp, 128 kb/s (default)
       Metadata:
         creation_time   : 2019-05-27T14:37:50.000000Z
         handler_name    : L-SMASH Audio Handler
         encoder         : Lavc57.107.100 wmav2
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
       Last message repeated 31 times
    frame=  228 fps=0.0 q=2.0 size=     291kB time=00:00:09.45 bitrate= 252.4kbits/s speed=18.9x    
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 2 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
       Last message repeated 1 times
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 2 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 2 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 2 dct coefficients to -127..127
       Last message repeated 1 times
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 2 dct coefficients to -127..127
       Last message repeated 1 times
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 2 dct coefficients to -127..127
       Last message repeated 5 times
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
       Last message repeated 2 times
    frame=  449 fps=449 q=2.0 size=     663kB time=00:00:18.98 bitrate= 286.2kbits/s speed=  19x    
    frame=  629 fps=418 q=1.6 size=    1307kB time=00:00:26.16 bitrate= 409.2kbits/s speed=17.4x    
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
       Last message repeated 30 times
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 2 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
       Last message repeated 7 times
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 2 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
       Last message repeated 2 times
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 2 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
       Last message repeated 34 times
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 2 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
       Last message repeated 1 times
    frame=  821 fps=410 q=2.0 size=    1960kB time=00:00:34.26 bitrate= 468.7kbits/s speed=17.1x    
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 2 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
       Last message repeated 71 times
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 2 dct coefficients to -127..127
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
       Last message repeated 82 times
    frame= 1026 fps=409 q=2.0 size=    2695kB time=00:00:42.96 bitrate= 513.8kbits/s speed=17.1x    
    [msmpeg4 @ 0x5648e407cb60] warning, clipping 1 dct coefficients to -127..127
       Last message repeated 102 times
    frame= 1202 fps=400 q=1.6 size=    3754kB time=00:00:50.13 bitrate= 613.4kbits/s speed=16.7x  
    ......
    frame=127987 fps=301 q=2.0 Lsize=  753842kB time=01:28:52.77 bitrate=1158.0kbits/s speed=12.6x    
    video:650704kB audio:83244kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.710554%

    EDIT:
    My workaround was to encode to MPEG; doing this, I'm able to read my file and the frame rate is the same as the source. If someone knows why the original frame rate couldn't be kept in wmv, I'm still interested to know.

  • Exceeded GA’s 10M hits data limit, now what?

    21 June 2019, by Joselyn Khor

    Exceeded GA’s 10M hits data limit, now what? Matomo has the answers

    “Your data volume (1XXM hits) exceeds the limit of 10M hits per month as outlined in our Terms of Service. If you continue to exceed the limit, we will stop processing new data on XXX 21, 2019. Learn more about possible solutions.”

    Yikes. Alarm bells were ringing when a Google Analytics free user came to us faced with this notice. Let’s call him ‘Mark’. Mark had reached the limits on the data he could collect through Google Analytics and was shocked by the limited options available to fix the problem without blowing the budget. The thoughts racing through his head were:

    • “What happens to all my data?”
    • “What if Google starts charging USD150K now?”

    Then he came across Matomo and decided to get in touch with our support team …

    “Can you fix this issue?” he asked us.

    “Absolutely!” we said.

    We’ll get back to helping Mark in a minute. For now let’s go over why this was such a dilemma for him.

    In order to resolve this data limits issue, one of the solutions was for him to upgrade to Google Analytics 360, which meant shelling out USD150,000 per year for their 1 billion hits per month option. Going from free to USD150,000 was too much of a stretch for a growing company.

    “Your data volume (1XXM hits) exceeds the limit of 10M hits per month …”, what did this message mean?

    With the free version, Mark could collect up to 10 million “hits” per month, per account. Going over meant Google Analytics could stop collecting any more data for free as outlined in their Terms.

    Google Analytics’ Terms of Service (2018, sec. 2) states, “Subject to Section 15, the Service is provided without charge to You for up to 10 million Hits per month per account.”[1]

    In general, what’s a "hit"?

    Data being sent to Google Analytics. It can be a transaction, event, social interaction or pageview - these all produce what Google calls a “hit”.

    (Image: Google Analytics Terms of Service)

    And their Analytics Help Data Limits (n.d.) support page makes clear that: “If a property sends more hits per month to Analytics than allowed by the Analytics Terms of Service, there is no assurance that the excess hits will be processed. If the property’s hit volume exceeds this limit, a warning may be displayed in the user interface and you may be prevented from accessing reports.”[2]

    (Image: Google Analytics’ data limits support page)

    Possible solutions

    So the possible solutions given by Google Analytics’ Data Limits support page were (also shown in the image below):

    • To pay USD150K to upgrade to Google Analytics 360
    • To send fewer hits by setting up sampling
    • Or choose the slightly less relevant option to upgrade mobile app tracking to Google Analytics for Firebase.

    Without the means to pay, the free version was fast becoming inaccessible for Mark as he was facing a future where he risked no longer having access to up-to-date data used in his business’ reporting.

    Mark was facing a problem that potentially didn’t have a cost-effective solution.

    (Image: Google Analytics’ data limits support page)

    So what can you really do about it?

    This is where we can help provide some assistance. If you’re reading this article, we’ll assume you can relate to Mark and share with you the advice on options we gave him.

    Options:

    Option 1 Send fewer hits by auditing your data collection processes (the option posed by Google)

    If you really don’t have the budget, you’ll need to reassess your data collection priorities and go over your strategies to see what is necessary to track, and what isn’t.

    • Make sure you know what you’re tracking and why. Look at what websites are being tracked by Google and into what properties.
    • Go through what data you’re tracking and decide what is or isn’t of value.
    • Set up data sampling; this, however, will lead to inaccurate data.

    From here you can start to course correct. If you’ve found data you’re not using for analysis, get rid of these events/pageviews in your Google Analytics.

    But the limitations here are that eventually, you’re going to run out of irrelevant metrics and everything you’re tracking will be essential. So you’ll hit another brick wall and return to the same situation.

    Option 2 Ignore and continue using the free version of Google Analytics

    With this option, you’ll have to bear the business risks involved by basing decisions off of analytics reports that may or may not be updated. In this case, you may still get contacted about exceeding the limits. As the free service is provided for only up to 10 million hits, once you’ve gone over them, you’re violating what’s stipulated in the Terms of Service. 

    There’s also the warning that “… you may be prevented from accessing reports” (Data limits, n.d.). So while we may not know for certain what Google Analytics will do, in this case it may be better to be safe rather than sorry by acting quickly to resolve it. 

    Option 3 The Matomo solution. Upgrade to a web analytics platform that can handle your demanding data requirements

    Save money while continuing to gain valuable insights by moving over to Matomo Analytics (recommended)

    This is where you can save up to USD130,000 a year. As well as that, the transition from Google Analytics to the Matomo Cloud is a seamless experience as setup and maintenance is taken care of by our experts.

    For example, you can get up to 15M pageviews for USD1,612.50/month (or USD19,350/year) on the Essentials plan.

    Or even 25M pageviews for USD2400/month (or USD24000/year) on the Business plan – which offers additional web analytics and conversion optimization resources.

    Matomo Cloud is a great option if you’re looking for a secure, cost-effective and powerful analytics solution. You also get what Google Analytics could never offer you: full control and ownership of your own data and privacy.

    No need to worry about losing your Google Analytics data because …

    Now you can import your historic Google Analytics data directly into your Matomo with the Google Analytics Importer tool. Simply follow the step-by-step guide to get started for free.

    Along with savings you can get:

    • A solution for the data limits issue forever. You choose the right plan to suit your data needs and adapt as you continue growing
    • 100% accurate data (no data sampling)
    • 100% data ownership of all your information without signing away your data to a third party
    • Powerful web analytics and conversion optimization features
    • Matomo Tag Manager
    • Easy setup
    • Support from Matomo’s specialists

    Learn more about Matomo Cloud pricing.

    Or go for Matomo On-Premise

    If you have the in-house infrastructure to support self-hosting Matomo on your own servers then there’s also the option of Matomo On-Premise. Here you’ll get full security knowing the data is on your own servers. 

    Setup will also require technical knowledge. There will also be costs associated with acquiring your own servers, and keeping up with regular maintenance and updates. With On-Premise you get maximum flexibility, with no data limits whatsoever. But if you’re coming over from Google Analytics and don’t have the infrastructure and team to host On-Premise, the Matomo Cloud could be right for you.

    Learn more about Matomo On-Premise.

    Where do you go from here ?

    Getting 10 million hits per month is no small feat; it’s actually pretty fantastic. But if it means having to shell out USD150,000 just to be able to continue with Google Analytics, we feel your problem could be fixed with Matomo Cloud. You could then put the rest of the money you save to better use.

    If you choose Matomo, you now have the option to:

    • Raise your data limits for a fraction of Google Analytics 360’s price
    • Get a comprehensive range of analytics features for the most impactful insights to ensure your website continues excelling
    • Get data that’s not sampled – meaning 100% accuracy in your reports
    • Migrate your data easily with the help of Matomo’s support team

    We’ll have you covered. 

    By sharing with you the options and advice we gave to Mark, we hope you’ll be able to find a solution that makes your life easier and solves the issue of data restrictions forever.

    The team at Matomo is here to help you every step of the way to ensure a stress-free transition from Google Analytics if that is what works best for you.

    For next steps, why not check out our pricing page to see what could suit your needs!

    References:

    [1] Terms of Service. (2018, July 24). In Google Analytics Terms of Service. Retrieved June 12, 2019, from https://www.google.com/analytics/terms/us.html

    [2] Data limits. (n.d.). In Analytics Help Data limits. Retrieved June 12, 2019, from https://support.google.com/analytics/answer/1070983?hl=en