
Media (91)

Other articles (46)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    • implementation costs to be shared between several different projects/individuals
    • rapid deployment of multiple unique sites
    • creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with MediaSPIP's automated installation script.

        Distribution   Version name           Version number
        Debian         Squeeze                6.x.x
        Debian         Wheezy                 7.x.x
        Debian         Jessie                 8.x.x
        Ubuntu         The Precise Pangolin   12.04 LTS
        Ubuntu         The Trusty Tahr        14.04
    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the fixes needed to add (...)

  • Definable image and logo sizes

    9 February 2011, by

    In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can vary from one theme to another, they can be defined directly in the theme, sparing the user from having to configure them manually after changing the site's appearance.
    These image sizes are also available in the specific configuration of MediaSPIP Core. The maximum size of the site logo in pixels, (...)

On other sites (3500)

  • Adventures in Unicode

    29 November 2012, by Multimedia Mike — Programming, php, Python, sqlite3, unicode

    Tangential to multimedia hacking is proper metadata handling. Recently, I have gathered an interest in processing a large corpus of multimedia files which are likely to contain metadata strings which do not fall into the lower ASCII set. This is significant because the lower ASCII set intersects perfectly with my own programming comfort zone. Indeed, all of my programming life, I have insisted on covering my ears and loudly asserting “LA LA LA LA LA! ALL TEXT EVERYWHERE IS ASCII!” I suspect I’m not alone in this.

    Thus, I took this as an opportunity to conquer my longstanding fear of Unicode. I developed a self-learning course comprising a series of exercises which add up to this diagram:



    Part 1: Understanding Text Encoding
    Python has regular strings by default and then it has Unicode strings. The latter are prefixed by the letter ‘u’. This is what ‘ö’ looks like encoded in each type.

     >>> 'ö', u'ö'
     ('\xc3\xb6', u'\xf6')

    A large part of my frustration with Unicode comes from Python yelling at me about UnicodeDecodeErrors and an inability to handle the number 0xc3 for some reason. This usually comes up when I’m trying to wrap my head around an unrelated problem and don’t care to get sidetracked by text encoding issues. However, when I studied the above output, I finally understood where the 0xc3 comes from. I just didn’t understand what the encoding represents exactly.

    I can see from assorted tables that ‘ö’ is character 0xF6 in various encodings (in Unicode and Latin-1), so u'\xf6' makes sense. But what does '\xc3\xb6' mean? It’s my style to excavate straight down to the lowest levels, and I wanted to understand exactly how characters are represented in memory. The UTF-8 encoding tables inform us that any Unicode code point above 0x7F but less than 0x800 will be encoded with 2 bytes:

     110xxxxx 10xxxxxx
    

    Applying this pattern to the \xc3\xb6 encoding:

                hex : 0xc3      0xb6
               bits : 11000011  10110110
     important bits : ---00011  --110110
          assembled : 00011110110
         code point : 0xf6
    

    I was elated when I drew that out and made the connection. Maybe I’m the last programmer to figure this stuff out. But I’m still happy that I actually understand those Python errors pertaining to the number 0xc3 and that I won’t have to apply canned solutions without understanding the core problem.
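    That bit arithmetic can be sketched in a few lines of Python as a sanity check (shown here in Python 3 syntax, unlike the article's Python 2 transcripts; the function name is mine):

    ```python
    # Manually decode a 2-byte UTF-8 sequence, mirroring the bit diagram above:
    # 110xxxxx 10xxxxxx -> keep the low 5 bits of the lead byte and the low 6
    # bits of the continuation byte, then concatenate them into a code point.
    def decode_utf8_2byte(b1, b2):
        assert b1 & 0b11100000 == 0b11000000  # lead byte must look like 110xxxxx
        assert b2 & 0b11000000 == 0b10000000  # continuation must look like 10xxxxxx
        return ((b1 & 0b00011111) << 6) | (b2 & 0b00111111)

    code_point = decode_utf8_2byte(0xC3, 0xB6)
    print(hex(code_point), chr(code_point))  # 0xf6 ö
    ```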

    I’m cheating on this part of this exercise just a little bit since the diagram implied that the Unicode text needs to come from a binary file. I’ll return to that in a bit. For now, I’ll just contrive the following Unicode string from the Python REPL:

     >>> u = u'Üñìçôđé'
     >>> u
     u'\xdc\xf1\xec\xe7\xf4\u0111\xe9'

    Part 2: From Python To SQLite3
    The next step is to see what happens when I use Python’s SQLite3 module to dump the string into a new database. Will the Unicode encoding be preserved on disk? What will UTF-8 look like on disk anyway?

     >>> import sqlite3
     >>> conn = sqlite3.connect('unicode.db')
     >>> conn.execute("CREATE TABLE t (t text)")
     >>> conn.execute("INSERT INTO t VALUES (?)", (u, ))
     >>> conn.commit()
     >>> conn.close()

    Next, I manually view the resulting database file (unicode.db) using a hex editor and look for strings. Here we go:

    000007F0   02 29 C3 9C  C3 B1 C3 AC  C3 A7 C3 B4  C4 91 C3 A9
    

    Look at that! It’s just like the \xc3\xb6 encoding we see in the regular Python strings.
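    Python itself can double-check the hex editor here: encoding the test string as UTF-8 reproduces exactly the byte run found in the database file (a Python 3 sketch, where every str is already a Unicode string):

    ```python
    # The same 'Üñìçôđé' string from Part 1; .encode() yields its UTF-8 bytes.
    u = '\xdc\xf1\xec\xe7\xf4\u0111\xe9'   # 'Üñìçôđé'
    encoded = u.encode('utf-8')
    print(' '.join('%02x' % b for b in encoded))
    # c3 9c c3 b1 c3 ac c3 a7 c3 b4 c4 91 c3 a9  -- matches the hex dump above
    ```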

    Part 3: From SQLite3 To A Web Page Via PHP
    Finally, use PHP (love it or hate it, but it’s what’s most convenient on my hosting provider) to query the string from the database and display it on a web page, completing the outlined processing pipeline.

     <?php
     $dbh = new PDO("sqlite:unicode.db");
     foreach ($dbh->query("SELECT t FROM t") as $row) {
         $unicode_string = $row['t'];
     }
     ?>

     <html>
     <head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head>
     <body><h1><?= $unicode_string ?></h1></body>
     </html>

    I tested the foregoing PHP script on 3 separate browsers that I had handy (Firefox, Internet Explorer, and Chrome):



    I’d say that counts as success! It’s important to note that the “meta http-equiv” tag is absolutely necessary. Omit it and see something like this:



    Since we know what the UTF-8 stream looks like, it’s pretty obvious how the mapping is operating here: 0xc3 and 0xc4 correspond to ‘Ã’ and ‘Ä’, respectively. This corresponds to an encoding named ISO/IEC 8859-1, a.k.a. Latin-1. Speaking of which…
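    That misreading is easy to reproduce in a couple of lines (a Python 3 sketch, not from the original article):

    ```python
    # Classic mojibake: produce UTF-8 bytes, then (wrongly) decode them as Latin-1.
    # The UTF-8 lead byte 0xC3 comes back as a literal 'Ã' character.
    garbled = 'ö'.encode('utf-8').decode('latin-1')
    print(garbled)  # Ã¶
    ```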

    Part 4: Converting Binary Data To Unicode
    At the start of the experiment, I was trying to extract metadata strings from these binary multimedia files and I noticed characters like our friend ‘ö’ from above. In the bytestream, this was represented simply with 0xf6. I mistakenly believed that this was the on-disk representation of UTF-8. Wrong. Turns out it’s Latin-1.

    However, I still need to solve the problem of transforming such strings into Unicode to be shoved through the pipeline diagrammed above. For this experiment, I created a 9-byte file with the Latin-1 string ‘Üñìçôdé’ couched by 0s, to simulate yanking a string out of a binary file. Here’s unicode.file:

    00000000   00 DC F1 EC  E7 F4 64 E9  00         ......d..
    

    (Aside: this experiment uses plain ‘d’ since the ‘đ’ with a bar through it doesn’t occur in Latin-1; it shows up all over the place in Vietnamese, at least.)

    I’ve been mashing around Python code via the REPL, trying to get this string into a Unicode-friendly format. This is a successful method but it’s probably not the best:

     >>> import struct
     >>> f = open('unicode.file', 'r').read()
     >>> u = u''
     >>> for c in struct.unpack("B"*7, f[1:8]):
     ...     u += unichr(c)
     ...
     >>> u
     u'\xdc\xf1\xec\xe7\xf4d\xe9'
     >>> print u
     Üñìçôdé
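    For reference, later Pythons make the struct/unichr loop a one-liner: read the bytes and decode the slice as Latin-1, which maps each byte 0x00-0xFF straight to code points U+0000-U+00FF (a Python 3 sketch; it assumes, as above, that the extracted slice really is Latin-1):

    ```python
    # Simulate the 9-byte unicode.file from above and decode the embedded string.
    raw = b'\x00\xdc\xf1\xec\xe7\xf4\x64\xe9\x00'
    u = raw[1:8].decode('latin-1')  # bytes -> str, one code point per byte
    print(u)  # Üñìçôdé
    ```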

    Conclusion
    Dealing with text encoding matters reminds me of dealing with integer endianness concerns. When you’re just dealing with one system, you probably don’t need to think too much about it because the system is usually handling everything consistently underneath the covers.

    However, when the data leaves one system and will be interpreted by another system, that’s when a programmer needs to be cognizant of matters such as integer endianness or text encoding.

  • Problems with Streaming a Multicast RTSP Stream with Live555

    16 June 2014, by ALM865

    I am having trouble setting up a Multicast RTSP session using Live555. The examples included with Live555 are mostly irrelevant as they deal with reading in files, and my code differs because it reads in encoded frames generated by an FFMPEG thread within my own program (no pipes, no saving to disk; it genuinely passes pointers to memory that contain the encoded frames for Live555 to stream).

    My Live555 project uses a custom Server Media Subsession so that I can receive data from an FFMPEG thread within my program (instead of Live555’s default reading from a file, yuk!). This is a requirement of my program, as it reads in a GigEVision stream in one thread, sends the decoded raw RGB packets to the FFMPEG thread, which in turn sends the encoded frames off to Live555 for RTSP streaming.

    For the life of me I can’t work out how to send the RTSP stream as multicast instead of unicast!

    Just a note, my program works perfectly at the moment streaming Unicast, so there is nothing wrong with my Live555 implementation (before you go crazy picking out irrelevant errors!). I just need to know how to modify my existing code to stream Multicast instead of Unicast.

    My program is way too big to upload and share so I’m just going to share the important bits:

    Live_AnalysingServerMediaSubsession.h

    #ifndef _ANALYSING_SERVER_MEDIA_SUBSESSION_HH
    #define _ANALYSING_SERVER_MEDIA_SUBSESSION_HH

    #include
    #include "Live_AnalyserInput.h"

    class AnalysingServerMediaSubsession: public OnDemandServerMediaSubsession {

    public:
     static AnalysingServerMediaSubsession*
     createNew(UsageEnvironment& env, AnalyserInput& analyserInput, unsigned estimatedBitrate,
           Boolean iFramesOnly = False,
               double vshPeriod = 5.0
               /* how often (in seconds) to inject a Video_Sequence_Header,
                  if one doesn't already appear in the stream */);

    protected: // we're a virtual base class
     AnalysingServerMediaSubsession(UsageEnvironment& env, AnalyserInput& AnalyserInput, unsigned estimatedBitrate, Boolean iFramesOnly, double vshPeriod);
     virtual ~AnalysingServerMediaSubsession();

    protected:
     AnalyserInput& fAnalyserInput;
     unsigned fEstimatedKbps;

    private:
     Boolean fIFramesOnly;
     double fVSHPeriod;

     // redefined virtual functions
     virtual FramedSource* createNewStreamSource(unsigned clientSessionId, unsigned& estBitrate);
     virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic, FramedSource* inputSource);

    };

    #endif

    And "Live_AnalysingServerMediaSubsession.cpp"

    #include "Live_AnalysingServerMediaSubsession.h"
    #include
    #include
    #include

    AnalysingServerMediaSubsession* AnalysingServerMediaSubsession::createNew(UsageEnvironment& env, AnalyserInput& wisInput, unsigned estimatedBitrate,
       Boolean iFramesOnly,
       double vshPeriod) {
           return new AnalysingServerMediaSubsession(env, wisInput, estimatedBitrate,
               iFramesOnly, vshPeriod);
    }

    AnalysingServerMediaSubsession
       ::AnalysingServerMediaSubsession(UsageEnvironment& env, AnalyserInput& analyserInput, unsigned estimatedBitrate, Boolean iFramesOnly, double vshPeriod)
       : OnDemandServerMediaSubsession(env, True /*reuse the first source*/),

       fAnalyserInput(analyserInput), fIFramesOnly(iFramesOnly), fVSHPeriod(vshPeriod) {
           fEstimatedKbps = (estimatedBitrate + 500)/1000;

    }

    AnalysingServerMediaSubsession
       ::~AnalysingServerMediaSubsession() {
    }

    FramedSource* AnalysingServerMediaSubsession::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
       estBitrate = fEstimatedKbps;

       // Create a framer for the Video Elementary Stream:
       //LOG_MSG("Create Net Stream Source [%d]", estBitrate);

       return MPEG1or2VideoStreamDiscreteFramer::createNew(envir(), fAnalyserInput.videoSource());
    }

    RTPSink* AnalysingServerMediaSubsession::createNewRTPSink(Groupsock* rtpGroupsock, unsigned char /*rtpPayloadTypeIfDynamic*/, FramedSource* /*inputSource*/) {
       setVideoRTPSinkBufferSize();
       /*
       struct in_addr destinationAddress;
       destinationAddress.s_addr = inet_addr("239.255.12.42");

       rtpGroupsock->addDestination(destinationAddress,8888);
       rtpGroupsock->multicastSendOnly();
       */
       return MPEG1or2VideoRTPSink::createNew(envir(), rtpGroupsock);
    }

    Live_AnalyserSource.h

    #ifndef _ANALYSER_SOURCE_HH
    #define _ANALYSER_SOURCE_HH

    #ifndef _FRAMED_SOURCE_HH
    #include "FramedSource.hh"
    #endif

    class FFMPEG;

    // The following class can be used to define specific encoder parameters
    class AnalyserParameters {
    public:
     FFMPEG * Encoding_Source;
    };

    class AnalyserSource: public FramedSource {
    public:
     static AnalyserSource* createNew(UsageEnvironment& env, FFMPEG * E_Source);
     static unsigned GetRefCount();


    public:
     static EventTriggerId eventTriggerId;

    protected:
     AnalyserSource(UsageEnvironment& env, FFMPEG * E_Source);
     // called only by createNew(), or by subclass constructors
     virtual ~AnalyserSource();

    private:
     // redefined virtual functions:
     virtual void doGetNextFrame();

    private:
     static void deliverFrame0(void* clientData);
     void deliverFrame();


    private:
     static unsigned referenceCount; // used to count how many instances of this class currently exist
     FFMPEG * Encoding_Source;

     unsigned int Last_Sent_Frame_ID;
    };

    #endif

    Live_AnalyserSource.cpp

    #include "Live_AnalyserSource.h"
    #include  // for "gettimeofday()"
    #include "FFMPEGClass.h"

    AnalyserSource* AnalyserSource::createNew(UsageEnvironment& env, FFMPEG * E_Source) {
     return new AnalyserSource(env, E_Source);
    }


    EventTriggerId AnalyserSource::eventTriggerId = 0;

    unsigned AnalyserSource::referenceCount = 0;

    AnalyserSource::AnalyserSource(UsageEnvironment& env, FFMPEG * E_Source) : FramedSource(env), Encoding_Source(E_Source) {
     if (referenceCount == 0) {
       // Any global initialization of the device would be done here:

     }
     ++referenceCount;

     // Any instance-specific initialization of the device would be done here:
     Last_Sent_Frame_ID = 0;

     /* register us with the Encoding thread so we'll get notices when new frame data turns up.. */
     Encoding_Source->RegisterRTSP_Source(&(env.taskScheduler()), this);

     // We arrange here for our "deliverFrame" member function to be called
     // whenever the next frame of data becomes available from the device.
     //
     // If the device can be accessed as a readable socket, then one easy way to do this is using a call to
     //     envir().taskScheduler().turnOnBackgroundReadHandling( ... )
     // (See examples of this call in the "liveMedia" directory.)
     //
     // If, however, the device *cannot* be accessed as a readable socket, then instead we can implement is using 'event triggers':
     // Create an 'event trigger' for this device (if it hasn't already been done):
     if (eventTriggerId == 0) {
       eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
     }
    }

    AnalyserSource::~AnalyserSource() {
     // Any instance-specific 'destruction' (i.e., resetting) of the device would be done here:

     /* de-register this source from the Encoding thread, since we no longer need notices.. */
     Encoding_Source->Un_RegisterRTSP_Source(this);

     --referenceCount;
     if (referenceCount == 0) {
       // Any global 'destruction' (i.e., resetting) of the device would be done here:

       // Reclaim our 'event trigger'
       envir().taskScheduler().deleteEventTrigger(eventTriggerId);
       eventTriggerId = 0;
     }

    }

    unsigned AnalyserSource::GetRefCount() {
     return referenceCount;
    }

    void AnalyserSource::doGetNextFrame() {
     // This function is called (by our 'downstream' object) when it asks for new data.
     //LOG_MSG("Do Next Frame..");
     // Note: If, for some reason, the source device stops being readable (e.g., it gets closed), then you do the following:
     //if (0 /* the source stops being readable */ /*%%% TO BE WRITTEN %%%*/) {
     unsigned int FrameID = Encoding_Source->GetFrameID();
     if (FrameID == 0){
       //LOG_MSG("No Data. Close");
       handleClosure(this);
       return;
     }



     // If a new frame of data is immediately available to be delivered, then do this now:
     if (Last_Sent_Frame_ID != FrameID){
       deliverFrame();
       //DEBUG_MSG("Frame ID: %d",FrameID);
     }

     // No new data is immediately available to be delivered.  We don't do anything more here.
     // Instead, our event trigger must be called (e.g., from a separate thread) when new data becomes available.
    }

    void AnalyserSource::deliverFrame0(void* clientData) {
     ((AnalyserSource*)clientData)->deliverFrame();
    }

    void AnalyserSource::deliverFrame() {

     if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet


     static u_int8_t* newFrameDataStart;
     static unsigned newFrameSize = 0;

     /* get the data frame from the Encoding thread.. */
     if (Encoding_Source->GetFrame(&newFrameDataStart, &newFrameSize, &Last_Sent_Frame_ID)){
       if (newFrameDataStart!=NULL) {
           /* This should never happen, but check anyway.. */
           if (newFrameSize > fMaxSize) {
             fFrameSize = fMaxSize;
             fNumTruncatedBytes = newFrameSize - fMaxSize;
           } else {
             fFrameSize = newFrameSize;
           }
           gettimeofday(&fPresentationTime, NULL); // If you have a more accurate time - e.g., from an encoder - then use that instead.
           // If the device is *not* a 'live source' (e.g., it comes instead from a file or buffer), then set "fDurationInMicroseconds" here.
           /* move the data to be sent off.. */
           memmove(fTo, newFrameDataStart, fFrameSize);

           /* release the Mutex we had on the Frame's buffer.. */
           Encoding_Source->ReleaseFrame();
       }
       else {
           //AM Added, something bad happened
           //ALTRACE("LIVE555: FRAME NULL\n");
           fFrameSize=0;
           fTo=NULL;
           handleClosure(this);
       }
     }
     else {
       //LOG_MSG("Closing Connection due to Frame Error..");
       handleClosure(this);
     }


     // After delivering the data, inform the reader that it is now available:
     FramedSource::afterGetting(this);
    }

    Live_AnalyserInput.cpp

    #include "Live_AnalyserInput.h"
    #include "Live_AnalyserSource.h"


    ////////// WISInput implementation //////////

    AnalyserInput* AnalyserInput::createNew(UsageEnvironment& env, FFMPEG *Encoder) {
     if (!fHaveInitialized) {
       //if (!initialize(env)) return NULL;
       fHaveInitialized = True;
     }

     return new AnalyserInput(env, Encoder);
    }


    FramedSource* AnalyserInput::videoSource() {
     if (fOurVideoSource == NULL || AnalyserSource::GetRefCount() == 0) {
       fOurVideoSource = AnalyserSource::createNew(envir(), m_Encoder);
     }
     return fOurVideoSource;
    }


    AnalyserInput::AnalyserInput(UsageEnvironment& env, FFMPEG *Encoder): Medium(env), m_Encoder(Encoder) {
    }

    AnalyserInput::~AnalyserInput() {
     /* When we get destroyed, make sure our source is also destroyed.. */
     if (fOurVideoSource != NULL && AnalyserSource::GetRefCount() != 0) {
       AnalyserSource::handleClosure(fOurVideoSource);
     }
    }




    Boolean AnalyserInput::fHaveInitialized = False;
    int AnalyserInput::fOurVideoFileNo = -1;
    FramedSource* AnalyserInput::fOurVideoSource = NULL;

    Live_AnalyserInput.h

    #ifndef _ANALYSER_INPUT_HH
    #define _ANALYSER_INPUT_HH

    #include
    #include "FFMPEGClass.h"


    class AnalyserInput: public Medium {
    public:
     static AnalyserInput* createNew(UsageEnvironment&amp; env, FFMPEG *Encoder);

     FramedSource* videoSource();

    private:
     AnalyserInput(UsageEnvironment&amp; env, FFMPEG *Encoder); // called only by createNew()
     virtual ~AnalyserInput();

    private:
     friend class WISVideoOpenFileSource;
     static Boolean fHaveInitialized;
     static int fOurVideoFileNo;
     static FramedSource* fOurVideoSource;
     FFMPEG *m_Encoder;
    };

    // Functions to set the optimal buffer size for RTP sink objects.
    // These should be called before each RTPSink is created.
    #define VIDEO_MAX_FRAME_SIZE 300000
    inline void setVideoRTPSinkBufferSize() { OutPacketBuffer::maxSize = VIDEO_MAX_FRAME_SIZE; }

    #endif

    And finally the relevant code from my Live555 worker thread that starts the whole process:

       Stop_RTSP_Loop=0;
       //  MediaSession     *ms;
       TaskScheduler    *scheduler;
       UsageEnvironment *env ;
       //  RTSPClient       *rtsp;
       //  MediaSubsession  *Video_Sub;

       char RTSP_Address[1024];
       RTSP_Address[0]=0x00;

       if (m_Encoder == NULL){
           //DEBUG_MSG("No Video Encoder registered for the RTSP Encoder");
           return 0;
       }

       scheduler = BasicTaskScheduler::createNew();
       env = BasicUsageEnvironment::createNew(*scheduler);

       UserAuthenticationDatabase* authDB = NULL;
    #ifdef ACCESS_CONTROL
       // To implement client access control to the RTSP server, do the following:

       if (m_Enable_Pass){
           authDB = new UserAuthenticationDatabase;
           authDB->addUserRecord(UserN, PassW);
       }
       ////////// authDB = new UserAuthenticationDatabase;
       ////////// authDB->addUserRecord((char*)"Admin", (char*)"Admin"); // replace these with real strings
       // Repeat the above with each <username>, <password> that you wish to allow
       // access to the server.
    #endif

       // Create the RTSP server:
       RTSPServer* rtspServer = RTSPServer::createNew(*env, 554, authDB);
       ServerMediaSession* sms;

       AnalyserInput* inputDevice;


       if (rtspServer == NULL) {
           TRACE("LIVE555: Failed to create RTSP server: %s\n", env->getResultMsg());
           return 0;
       }
       else {
           char const* descriptionString = "Session streamed by \"IMC Server\"";



           // Initialize the WIS input device:
           inputDevice = AnalyserInput::createNew(*env, m_Encoder);
           if (inputDevice == NULL) {
               TRACE("Live555: Failed to create WIS input device\n");
               return 0;
           }
           else {
               // A MPEG-1 or 2 video elementary stream:
               /* Increase the buffer size so we can handle the high res stream.. */
               OutPacketBuffer::maxSize = 300000;
               // NOTE: This *must* be a Video Elementary Stream; not a Program Stream
               sms = ServerMediaSession::createNew(*env, RTSP_Address, RTSP_Address, descriptionString);

               //sms->addSubsession(MPEG1or2VideoFileServerMediaSubsession::createNew(*env, inputFileName, reuseFirstSource, iFramesOnly));

               sms->addSubsession(AnalysingServerMediaSubsession::createNew(*env, *inputDevice, m_Encoder->Get_Bitrate()));
               //sms->addSubsession(WISMPEG1or2VideoServerMediaSubsession::createNew(sms->envir(), inputDevice, videoBitrate));

               rtspServer->addServerMediaSession(sms);

               //announceStream(rtspServer, sms, streamName, inputFileName);
               //LOG_MSG("Play this stream using the URL %s", rtspServer->rtspURL(sms));

           }
       }

       Stop_RTSP_Loop=0;

       for (;;)
       {
           /* The actual work is all carried out inside the LIVE555 Task scheduler */
           env->taskScheduler().doEventLoop(&Stop_RTSP_Loop); // does not return

           if (mStop) {
               break;
           }
       }

       Medium::close(rtspServer); // will also reclaim "sms" and its "ServerMediaSubsession"s
       Medium::close(inputDevice);

  • How to Choose the Optimal Multi-Touch Attribution Model for Your Organisation

    13 March 2023, by Erin — Analytics Tips

    If you struggle to connect the dots on your customer journeys, you are researching the right solution.

    Multi-channel attribution models allow you to better understand the users’ paths to conversion and identify key channels and marketing assets that assist them.

    That said, each attribution model has inherent limitations, which make the selection process even harder.

    This guide explains how to choose the optimal multi-touch attribution model. We cover the pros and cons of popular attribution models, main evaluation criteria and how-to instructions for model implementation. 

    Pros and Cons of Different Attribution Models 

    Types of Attribution Models

    First Interaction 

    The First Interaction attribution model (also known as first touch) assigns full credit for the conversion to the first channel that brought in the lead. However, it doesn’t report the other interactions the visitor had before converting.

    Marketers who are primarily focused on demand generation and user acquisition find the first-touch attribution model useful for evaluating and optimising the top of the funnel (ToFU). 

    Pros 

    • Reflects the start of the customer journey
    • Shows channels that bring in the best-qualified leads 
    • Helps track brand awareness campaigns

    Cons 

    • Ignores the impact of later interactions at the middle and bottom of the funnel 
    • Doesn’t provide a full picture of users’ decision-making process 

    Last Interaction 

    The Last Interaction attribution model (also known as last touch) shifts the entire credit allocation to the last channel before conversion, but it doesn’t account for the contribution of all the other channels. 

    If your focus is conversion optimization, the last-touch model helps you determine which channels, assets or campaigns seal the deal for the prospect. 

    Pros 

    • Reports bottom-of-the-funnel events
    • Requires minimal data and configurations 
    • Helps estimate cost-per-lead or cost-per-acquisition

    Cons 

    • No visibility into assisted conversions and prior visitor interactions 
    • Overemphasises the importance of the last channel (which can often be direct traffic) 

    Last Non-Direct Interaction 

    Last Non-Direct attribution excludes direct traffic from the calculation and assigns the full conversion credit to the preceding channel. For example, a paid ad will receive 100% of the credit for a conversion if the visitor then goes directly to your website to buy the product. 

    Last Non-Direct attribution provides greater clarity into bottom-of-the-funnel (BoFU) events. Yet it still under-reports the role other channels played in the conversion. 

    Pros 

    • Improved channel visibility, compared to Last-Touch 
    • Avoids over-valuing direct visits
    • Reports on lead-generation efforts

    Cons 

    • Doesn’t work for account-based marketing (ABM) 
    • Values lead quantity over lead quality 

    Linear Model

    The linear attribution model assigns equal credit for a conversion to all tracked touchpoints, regardless of their impact on the visitor’s decision to convert.

    It helps you understand the full conversion path. But this model doesn’t distinguish between the importance of lead generation activities versus nurturing touches.

    Pros 

    • Focuses on all touch points associated with a conversion 
    • Reflects more steps in the customer journey 
    • Helps analyse longer sales cycles

    Cons 

    • Doesn’t accurately reflect the varying roles of each touchpoint 
    • Can dilute the credit if too many touchpoints are involved 

    Time Decay Model 

    The time decay model assumes that the closer a touchpoint is to the conversion, the greater its influence. Touchpoints right before the conversion get the highest credit, while the earliest ones are weighted lower (e.g., 5%-5%-10%-15%-25%-30%).

    This model better reflects real-life customer journeys. However, it devalues the impact of brand awareness and demand-generation campaigns. 

    Pros 

    • Helps track longer sales cycles and reports on each touchpoint involved 
    • Allows customising the half-life of decay to improve reporting 
    • Promotes conversion optimization at BoFu stages

    Cons 

    • Can prompt marketers to curtail ToFU spending, which would translate to fewer qualified leads at lower stages
    • Doesn’t reflect highly-influential events at earlier stages (e.g., a product demo request or free account registration, which didn’t immediately lead to conversion)

    Position-Based Model 

    The Position-Based attribution model (also known as the U-shaped model) allocates the biggest credit to the first and the last interaction (40% each), then distributes the remaining 20% across the other touches. 

    For many marketers, that’s the preferred multi-touch attribution model as it allows optimising both ToFU and BoFU channels. 

    Pros 

    • Helps establish the main channels for lead generation and conversion
    • Adds extra layers of visibility, compared to first- and last-touch attribution models 
    • Promotes budget allocation toward the most strategic touchpoints

    Cons 

    • Diminishes the importance of lead nurturing activities as more credit gets assigned to demand-gen and conversion-generation channels
    • Limited flexibility since it always assigns a fixed amount of credit to the first and last touchpoints, and the remaining credit is divided evenly among the other touchpoints
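    The credit splits described above can be made concrete with a short sketch (illustrative Python using the percentages quoted in this guide; the journey and channel names are made up, and real analytics tools expose these weights as configuration):

    ```python
    # Toy credit allocation for a single four-touch conversion path.
    def linear(touches):
        # Equal credit to every touchpoint.
        return {t: 1 / len(touches) for t in touches}

    def position_based(touches):
        # 40% to the first touch, 40% to the last, remaining 20% split evenly.
        credit = {t: 0.0 for t in touches}
        credit[touches[0]] += 0.4
        credit[touches[-1]] += 0.4
        for t in touches[1:-1]:
            credit[t] += 0.2 / len(touches[1:-1])
        return credit

    journey = ["paid ad", "blog post", "email", "direct visit"]
    print(linear(journey))          # 25% each
    print(position_based(journey))  # 40% / 10% / 10% / 40%
    ```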

    How to Choose the Right Multi-Touch Attribution Model For Your Business 

    If you’re deciding which attribution model is best for your business, prepare for a heated discussion. Each one has its trade-offs as it emphasises or devalues the role of different channels and marketing activities.

    To reach a consensus, the best strategy is to evaluate each model against three criteria: your marketing objectives, sales cycle length and data availability. 

    Marketing Objectives 

    Businesses generate revenue in many ways: through direct sales, subscriptions, referral fees, licensing agreements, one-off or retainer services, or any combination of these activities. 

    In each case, your marketing strategy will look different. For example, SaaS and direct-to-consumer (DTC) eCommerce brands have to maximise both demand generation and conversion rates. In contrast, a B2B cybersecurity consulting firm is more interested in attracting qualified leads (as opposed to any type of traffic) and progressively nurturing them towards a big-ticket purchase. 

    When selecting a multi-touch attribution model, prioritise your objectives first. Create a simple scoreboard, where your team ranks various channels and campaign types you rely on to close sales. 

    Alternatively, you can survey your customers to learn how they first heard about your company and what eventually triggered their conversion. Having data from both sides can help you cross-validate your assumptions and eliminate some biases. 

    Then consider which model would best reflect the role and importance of different channels in your sales cycle. Speaking of which…

    Sales Cycle Length 

    As shoppers, we spend less time deciding on a new toothpaste brand versus contemplating a new IT system purchase. Factors like industry, business model (B2C, DTC, B2B, B2BC), and deal size determine the average cycle length in your industry. 

    Statistically, low-ticket B2C sales can happen within just several interactions. The average B2B decision-making process can have over 15 steps, spread over several months. 

    That’s why not all multi-touch attribution models work equally well for every business. Time-decay suits B2B companies better, while B2C brands usually go for position-based or linear attribution. 
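A short sketch of the time-decay idea, for comparison: each touch is weighted by how close it was to the conversion, using a hypothetical exponential half-life (the decay schedule and half-life are tool-specific assumptions, not a universal formula).

```python
def time_decay_credit(touches, half_life_days=7.0):
    """touches: list of (channel, days_before_conversion).
    Weight each touch by 2 ** (-days / half_life), then normalise
    so the credit for one conversion sums to 1."""
    weights = [(ch, 2 ** (-days / half_life_days)) for ch, days in touches]
    total = sum(w for _, w in weights)
    credit = {}
    for ch, w in weights:
        credit[ch] = credit.get(ch, 0.0) + w / total
    return credit

# Hypothetical B2B journey: webinar 3 weeks out, email 1 week out, demo on the day
print(time_decay_credit([("webinar", 21), ("email", 7), ("demo", 0)]))
```

With a 7-day half-life, the demo gets the largest share and the webinar the smallest, which mirrors how a long B2B cycle gets modelled: recent, bottom-of-funnel touches dominate.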

    Data Availability 

    Businesses struggle with multi-touch attribution model implementation due to incomplete analytics data. 

    Our web analytics tool captures more data than Google Analytics. That’s because we rely on a privacy-focused tracking mechanism, which allows you to collect analytics without showing a cookie consent banner in markets outside of Germany and the UK. 

    Cookie consent banners are mandatory with Google Analytics. Yet, almost 40% of global consumers reject them. This results in gaps in your analytics and subsequent inconsistencies in multi-touch attribution reports. With Matomo, you can compliantly collect more data for accurate reporting. 

    Some companies also struggle to connect collected insights to individual shoppers. With Matomo, you can cross-attribute users across browsing sessions, using our visitors’ tracking feature. 

    When you already know a user’s identifier (e.g., full name or email address), you can track their on-site behaviours over time to better understand how they interact with your content and complete their purchases. Quick disclaimer, though: visitors’ tracking may not be considered compliant with certain data privacy laws. Please consult with a local authority if you have doubts. 

    How to Implement Multi-Touch Attribution

    Multi-touch attribution modelling implementation is like a “seek and find” game. You have to identify all significant touchpoints in your customers’ journeys. And sometimes also brainstorm new ways to uncover the missing parts. Then figure out the best way to track users’ actions at those stages (aka conversion and event tracking). 

    Here’s a step-by-step walkthrough to help you get started. 

    Select a Multi-Touch Attribution Tool 

    The global marketing attribution software market is worth $3.1 billion. Meaning there are plenty of tools, differing in terms of accuracy, sophistication and price.

    To make the right call, prioritise five factors:

    • Available models : Look for a solution that offers multiple options and allows you to experiment with different modelling techniques or develop custom models. 
    • Implementation complexity : Some providers offer advanced data modelling tools for creating custom multi-touch attribution models, but offer few out-of-the-box modelling options. 
    • Accuracy : Check if the shortlisted tool collects the type of data you need. Prioritise providers who are less dependent on third-party cookies and allow you to identify repeat users. 
    • Your marketing stack : Some marketing attribution tools come with useful add-ons such as tag manager, heatmaps, form analytics, user session recordings and A/B testing tools. This means you can collect more data for multi-channel modelling with them instead of investing in extra software. 
    • Compliance : Ensure that the selected multi-attribution analytics software wouldn’t put you at risk of GDPR non-compliance when it comes to user privacy and consent to tracking/analysis. 

    Finally, evaluate the adoption costs. Free multi-channel analytics tools come with data quality and consistency trade-offs. Premium attribution tools may have “hidden” licensing costs and bill you for extra data integrations. 

    Look for a tool that offers a good price-to-value ratio (i.e., one that offers extra perks for a transparent price). 

    Set Up Proper Data Collection 

    Multi-touch attribution requires ample user data. To collect the right type of insights, you need to set up: 

    • Website analytics : Ensure that you have all tracking codes installed (and working correctly !) to capture pageviews, on-site actions, referral sources and other data points around what users do on page. 
    • Tags : Add tracking parameters to monitor different referral channels (e.g., “facebook”), campaign types (e.g., ”final-sale”), and creative assets (e.g., “banner-1”). Tags help you get a clearer picture of different touchpoints. 
    • Integrations : To better identify on-site users and track their actions, you can also populate your attribution tool with data from your other tools – CRM system, A/B testing app, etc. 
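Tagging in practice just means appending tracking parameters to the links you share. Here is a small Python sketch that builds a tagged campaign URL using Matomo-style `mtm_*` parameters (many tools use `utm_*` instead; check your analytics tool’s documentation for the exact parameter names it recognises):

```python
from urllib.parse import urlencode

def tag_url(base_url, campaign, source, content):
    """Append campaign-tracking parameters to a landing page URL."""
    params = urlencode({
        "mtm_campaign": campaign,  # campaign type, e.g. "final-sale"
        "mtm_source": source,      # referral channel, e.g. "facebook"
        "mtm_content": content,    # creative asset, e.g. "banner-1"
    })
    return f"{base_url}?{params}"

# Hypothetical landing page and campaign values
print(tag_url("https://example.com/landing", "final-sale", "facebook", "banner-1"))
```

Generating tagged links programmatically (rather than by hand) keeps parameter names consistent, which matters later when you group touchpoints by channel and campaign in attribution reports.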

    Finally, think about the ideal lookback window — a bounded time frame you’ll use to calculate conversions. For example, Matomo has default windows of 7, 30 or 90 days. But you can configure a custom period to better reflect your average sales cycle. For instance, if you’re selling makeup, a shorter window could yield better results. But if you’re selling CRM software for the manufacturing industry, consider extending it.
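The lookback window is just a date filter applied before any credit is assigned. A minimal sketch, assuming you have a list of touchpoint timestamps per converting user:

```python
from datetime import datetime, timedelta

def within_lookback(touch_dates, conversion_date, window_days=30):
    """Keep only touchpoints that fall inside the lookback window
    ending at the conversion date; everything earlier is discarded
    before attribution credit is computed."""
    cutoff = conversion_date - timedelta(days=window_days)
    return [d for d in touch_dates if cutoff <= d <= conversion_date]

# Hypothetical journey: one touch falls outside a 30-day window
conversion = datetime(2024, 3, 31)
touches = [datetime(2024, 1, 5), datetime(2024, 3, 10), datetime(2024, 3, 28)]
print(within_lookback(touches, conversion, window_days=30))
```

In this example the January touch is dropped, which is exactly the trade-off the window setting controls: a window that is too short silently erases early-funnel channels from your reports.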

    Configure Goals and Events 

    Goals indicate your main marketing objectives — more traffic, conversions and sales. In web analytics tools, you can measure these by tracking specific user behaviours. 

    For example, if your goal is lead generation, you can track:

    • Newsletter sign-ups 
    • Product demo requests 
    • Gated content downloads 
    • Free trial account registrations 
    • Contact form submissions 
    • On-site call bookings 

    In each case, you can set up a unique tag to monitor these types of requests. Then analyse conversion rates — the percentage of users who have successfully completed the action. 

    To collect sufficient data for multi-channel attribution modelling, set up Goal Tracking for different types of touchpoints (MoFU & BoFU) and asset types (contact forms, downloadable assets, etc). 
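The conversion-rate arithmetic behind goal reports is simple; a small sketch with hypothetical goal counts shows the calculation your analytics tool performs for each tracked goal:

```python
def conversion_rate(completions, visitors):
    """Share of visitors who completed a tracked goal action."""
    return completions / visitors if visitors else 0.0

# Hypothetical goal data: (completions, visitors exposed to the asset)
goals = {
    "newsletter_signup": (120, 4000),
    "demo_request": (35, 4000),
}
for goal, (done, total) in goals.items():
    print(f"{goal}: {conversion_rate(done, total):.1%}")
```

Comparing these rates across MoFU and BoFU goals is what lets an attribution model weigh a demo request differently from a newsletter sign-up.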

    Your next task is to figure out how users interact with different on-site assets. That’s when Event Tracking comes in handy. 

    Event Tracking reports notify you about specific actions users take on your website. With Matomo Event Tracking, you can monitor where people click on your website, on which pages they click newsletter subscription links, or when they try to interact with static content elements (e.g., a non-clickable banner). 

    Using in-depth user behavioural reports, you can better understand which assets play a key role in the average customer journey. Using this data, you can localise “leaks” in your sales funnel and fix them to increase conversion rates.

    Test and Validate the Selected Model 

    A common challenge of multi-channel attribution modelling is determining the correct correlation and causality between exposure to touchpoints and purchases. 

    For example, a user who bought a discounted product from a Facebook ad would act differently than someone who purchased a full-priced product via a newsletter link. Their rate of pre- and post-sales exposure will also differ a lot — and your attribution model may not always accurately capture that. 

    That’s why you have to continuously test and tweak the selected model type. The best approach for that is lift analysis. 

    Lift analysis means comparing how your key metrics (e.g., revenue or conversion rates) change among users who were exposed to a certain campaign versus a control group. 

    In the case of multi-touch attribution modelling, you have to monitor how your metrics change after you’ve acted on the model recommendations (e.g., invested more in a well-performing referral channel or tried a new brand awareness Twitter ad). Compare the before and after ROI. If you see a positive dynamic, your model works great. 

    The downside of this approach is that you have to invest a lot upfront. But if your goal is to create a trustworthy attribution model, the best way to validate is to act on its suggestions and then test them against past results. 
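The lift calculation itself is straightforward: compare the conversion rate of users exposed to a campaign against a held-out control group. A minimal sketch with hypothetical group sizes:

```python
def lift(treated_conversions, treated_size, control_conversions, control_size):
    """Relative lift of the exposed group's conversion rate over the
    control group's: (treated_rate - control_rate) / control_rate."""
    treated_rate = treated_conversions / treated_size
    control_rate = control_conversions / control_size
    return (treated_rate - control_rate) / control_rate

# Hypothetical numbers: 90/1500 exposed users converted vs 40/1500 in control
print(f"Lift: {lift(90, 1500, 40, 1500):.0%}")
```

A clearly positive lift after acting on the model’s recommendations is evidence the model is steering budget in the right direction; a flat or negative lift means the attribution weights need revisiting. For a trustworthy read, keep the control group genuinely unexposed and the two groups comparable in size and makeup.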

    Conclusion

    A multi-touch attribution model helps you measure the impact of different channels, campaign types, and marketing assets on metrics that matter — conversion rate, sales volumes and ROI. 

    Using this data, you can invest budgets into the best-performing channels and confidently experiment with new campaign types. 

    As a Matomo user, you also get to do so without breaching customers’ privacy or compromising on analytics accuracy.

    Start using accurate multi-channel attribution in Matomo. Get your free 21-day trial now. No credit card required.