Advanced search

Media (1)

Keyword: - Tags -/publicité

Other articles (50)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the skeleton; a page for the configuration of the site’s home page; a page for the configuration of sections.
    It also provides an additional page that only appears when certain plugins are enabled, used to control their specific display options and features (...)

  • Requesting the creation of a channel

    12 March 2010

    Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at the moment of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields that first of all give the administrators information about (...)

  • Mediabox: opening images in the maximum space available to the user

    8 February 2011

    Image viewing is constrained by the width allowed by the site’s design (which depends on the theme in use), so images are displayed at a reduced size. To take advantage of all the space available on the user’s screen, a feature can be added that displays the image in a multimedia box overlaid on the rest of the content.
    To do this, the "Mediabox" plugin must be installed.
    Configuring the multimedia box
    As soon as (...)

On other sites (3019)

  • How to Stream RTP (IP camera) Into React App setup

    10 November 2024, by sharon2469

    I am trying to bring a live broadcast from an IP camera, or any other RTP/RTSP source, into my React application. BUT IT MUST BE LIVE.

    


    My setup at the moment is:

    


    IP Camera -> (RTP) -> FFmpeg -> (UDP) -> Server (Node.js) -> (WebRTC) -> React app

    


    In the current situation there is almost no delay, but there are some things here that I can't avoid and don't understand why, so here are my questions:

    


    1) First, is this setup even correct? Is this the only way to stream RTP video into a web app?

    


    2) Is it possible to avoid re-encoding the stream? The RTP transmission already arrives as H.264, so I shouldn't really need to run the following command:

    


    return spawn('ffmpeg', [
        '-re',                              // Read input at its native frame rate; important for live streaming
        '-probesize', '32',                 // Set probing size to 32 bytes (32 is the minimum)
        '-analyzeduration', '1000000',      // Analyze up to 1 second of input
        '-c:v', 'h264',                     // Video codec of the input stream
        '-i', 'rtp://238.0.0.2:48888',      // Input stream URL
        '-map', '0:v?',                     // Select video from the input stream
        '-c:v', 'libx264',                  // Video codec of the output stream
        '-preset', 'ultrafast',             // Faster encoding for lower latency
        '-tune', 'zerolatency',             // Optimize for zero latency
        // '-s', '768x480',                 // Adjust the resolution (experiment with values)
        '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
    ]);


    


    As you can see, this command re-encodes to libx264. But if I give FFmpeg '-c:v copy' instead of '-c:v libx264', FFmpeg throws an error saying it doesn't know how to encode h264 and only knows libx264. Basically, I want to drop the re-encoding because there is really no need for it; the stream is already encoded as H.264. Are there any recommendations?
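
    A copy-only sketch of what I mean (an assumption, not a verified command: it presumes the camera really sends H.264 and that FFmpeg can probe the stream without decoding it; '-c:v copy' is an output option, so it must come after '-i', and the input-side '-c:v h264' decoder hint is dropped because nothing gets decoded):

        // Hypothetical remux-only variant of createFFmpegProcess: no decode, no encode.
        return spawn('ffmpeg', [
            '-probesize', '32',                       // Minimal probing, as in the original command
            '-analyzeduration', '1000000',            // Analyze up to 1 second of input
            '-i', 'rtp://238.0.0.2:48888',            // Input stream URL
            '-map', '0:v?',                           // Select video from the input stream
            '-c:v', 'copy',                           // Pass the H.264 stream through untouched
            '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
        ]);

    If FFmpeg refuses to probe a bare rtp:// URL once the decoder hint is gone, the usual workaround is to describe the stream in an SDP file and read that instead (with '-protocol_whitelist file,udp,rtp').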

    


    3) I thought about giving up FFmpeg completely, but the RTP packets arrive at 1200+ bytes while WebRTC is limited to about 1280 bytes per packet. Is there a way to handle these oversized packets without damaging the video, and is it worth going down that path? I guess the whole jitter-buffer story comes into play here too.
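
    One knob that may help if you keep FFmpeg in the chain (my assumption, not something from the original setup): FFmpeg's UDP/RTP output accepts a pkt_size option on the URL to cap outgoing datagram size, and when re-encoding, x264's slice-max-size can keep each NAL unit under the MTU:

        // Sketch: keep outgoing packets under the ~1200-byte WebRTC MTU while re-encoding.
        return spawn('ffmpeg', [
            '-i', 'rtp://238.0.0.2:48888',                          // Input stream URL
            '-c:v', 'libx264',                                      // Re-encode (required for slice-max-size)
            '-preset', 'ultrafast',
            '-tune', 'zerolatency',
            '-x264-params', 'slice-max-size=1200',                  // NAL units no larger than 1200 bytes
            '-f', 'rtp', `rtp://127.0.0.1:${udpPort}?pkt_size=1200` // Cap datagram size as well
        ]);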

    


    This is my server-side code (THIS IS JUST TEST CODE):

    


    import {
    MediaStreamTrack,
    randomPort,
    RTCPeerConnection,
    RTCRtpCodecParameters,
    RtpPacket,
} from 'werift'
import {Server} from "ws";
import {createSocket} from "dgram";
import {spawn} from "child_process";
import LoggerFactory from "./logger/loggerFactory";

//

const log = LoggerFactory.getLogger('ServerMedia')

// Websocket server -> WebRTC
const serverPort = 8888
const server = new Server({port: serverPort});
log.info(`Server Media start on port: ${serverPort}`);

// UDP server -> ffmpeg
const udpPort = 48888
const udp = createSocket("udp4");
// udp.bind(udpPort, () => {
//     udp.addMembership("238.0.0.2");
// })
udp.bind(udpPort)
log.info(`UDP port: ${udpPort}`)


const createFFmpegProcess = () => {
    log.info(`Start ffmpeg process`)
    return spawn('ffmpeg', [
        '-re',                              // Read input at its native frame rate; important for live streaming
        '-probesize', '32',                 // Set probing size to 32 bytes (32 is the minimum)
        '-analyzeduration', '1000000',      // Analyze up to 1 second of input
        '-c:v', 'h264',                     // Video codec of the input stream
        '-i', 'rtp://238.0.0.2:48888',      // Input stream URL
        '-map', '0:v?',                     // Select video from the input stream
        '-c:v', 'libx264',                  // Video codec of the output stream
        '-preset', 'ultrafast',             // Faster encoding for lower latency
        '-tune', 'zerolatency',             // Optimize for zero latency
        // '-s', '768x480',                    // Adjust the resolution (experiment with values)
        '-f', 'rtp', `rtp://127.0.0.1:${udpPort}` // Output stream URL
    ]);

}

let ffmpegProcess = createFFmpegProcess();


const attachFFmpegListeners = () => {
    // Capture standard output and print it
    ffmpegProcess.stdout.on('data', (data) => {
        log.info(`FFMPEG process stdout: ${data}`);
    });

    // Capture standard error and print it
    ffmpegProcess.stderr.on('data', (data) => {
        console.error(`ffmpeg stderr: ${data}`);
    });

    // Listen for the exit event
    ffmpegProcess.on('exit', (code, signal) => {
        if (code !== null) {
            log.info(`ffmpeg process exited with code ${code}`);
        } else if (signal !== null) {
            log.info(`ffmpeg process killed with signal ${signal}`);
        }
    });
};


attachFFmpegListeners();


server.on("connection", async (socket) => {
    const payloadType = 96; // Dynamic RTP payload type advertised for H264 in the SDP offer/answer exchange
    // Create a peer connection with the codec parameters set in advance.
    const pc = new RTCPeerConnection({
        codecs: {
            audio: [],
            video: [
                new RTCRtpCodecParameters({
                    mimeType: "video/H264",
                    clockRate: 90000, // 90000 is the default value for H264
                    payloadType: payloadType,
                }),
            ],
        },
    });

    const track = new MediaStreamTrack({kind: "video"});


    udp.on("message", (data) => {
        console.log(data)
        const rtp = RtpPacket.deSerialize(data);
        rtp.header.payloadType = payloadType;
        track.writeRtp(rtp);
    });

    udp.on("error", (err) => {
        console.log(err)

    });

    udp.on("close", () => {
        console.log("close")
    });

    pc.addTransceiver(track, {direction: "sendonly"});

    await pc.setLocalDescription(await pc.createOffer());
    const sdp = JSON.stringify(pc.localDescription);
    socket.send(sdp);

    socket.on("message", (data: any) => {
        if (data.toString() === 'resetFFMPEG') {
            ffmpegProcess.kill('SIGINT');
            log.info(`FFMPEG process killed`)
            setTimeout(() => {
                ffmpegProcess = createFFmpegProcess();
                attachFFmpegListeners();
            }, 5000)
        } else {
            pc.setRemoteDescription(JSON.parse(data));
        }
    });
});


    


    And this is the frontend:

    


    <script
            crossorigin
            src="https://unpkg.com/react@16/umd/react.development.js"
    ></script>
    <script
            crossorigin
            src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"
    ></script>
    <script
            crossorigin
            src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.34/browser.min.js"
    ></script>
    <script src="https://cdn.jsdelivr.net/npm/babel-regenerator-runtime@6.5.0/runtime.min.js"></script>

    <script type="text/babel">
        let rtc;

        const App = () => {
            const [log, setLog] = React.useState([]);
            const videoRef = React.useRef();
            const socket = new WebSocket("ws://localhost:8888");
            const [peer, setPeer] = React.useState(null); // Keep track of the peer connection

            React.useEffect(() => {
                (async () => {
                    await new Promise((r) => (socket.onopen = r));
                    console.log("open websocket");

                    const handleOffer = async (offer) => {
                        console.log("new offer", offer.sdp);

                        const updatedPeer = new RTCPeerConnection({
                            iceServers: [],
                            sdpSemantics: "unified-plan",
                        });

                        updatedPeer.onicecandidate = ({ candidate }) => {
                            if (!candidate) {
                                const sdp = JSON.stringify(updatedPeer.localDescription);
                                console.log(sdp);
                                socket.send(sdp);
                            }
                        };

                        updatedPeer.oniceconnectionstatechange = () => {
                            console.log(
                                "oniceconnectionstatechange",
                                updatedPeer.iceConnectionState
                            );
                        };

                        updatedPeer.ontrack = (e) => {
                            console.log("ontrack", e);
                            videoRef.current.srcObject = e.streams[0];
                        };

                        await updatedPeer.setRemoteDescription(offer);
                        const answer = await updatedPeer.createAnswer();
                        await updatedPeer.setLocalDescription(answer);

                        setPeer(updatedPeer);
                    };

                    socket.onmessage = (ev) => {
                        const data = JSON.parse(ev.data);
                        if (data.type === "offer") {
                            handleOffer(data);
                        } else if (data.type === "resetFFMPEG") {
                            // Handle the resetFFMPEG message
                            console.log("FFmpeg reset requested");
                        }
                    };
                })();
            }, []);

            const sendRequestToResetFFmpeg = () => {
                socket.send("resetFFMPEG");
            };

            return (
                <div>
                    Video:
                    <video ref={videoRef} autoPlay muted />
                    <button onClick={() => sendRequestToResetFFmpeg()}>Reset FFMPEG</button>
                </div>
            );
        };

        ReactDOM.render(<App />, document.getElementById("app1"));
    </script>

  • Sporadic "Error parsing Cues... Operation not permitted" errors when trying to generate a DASH manifest

    22 November 2023, by kshetline

    I have already-generated .webm audio and video files (1 audio track and 3 video resolutions for each video I want to stream). The video was generated not (directly) by ffmpeg but by HandBrakeCLI 1.7.0, with VP9 encoding. The audio (which has never caused an error) is generated by ffmpeg using libvorbis.


    Most of the time ffmpeg (version 6.1) creates a manifest without any problem. Sporadically, however, "Error parsing Cues" comes up (frequently with the latest videos I've been trying to process) and I can't create a manifest. Since this happens inside an automated pipeline that processes many videos for streaming, the audio and video sources are created exactly the same way whether ffmpeg succeeds or fails in generating a manifest, which makes this all the more confusing.


    The video files ffmpeg chokes on play perfectly well using VLC, and mediainfo doesn't show any problems with these files.


    Here's how I've been (sometimes successfully, sometimes not) generating a manifest, with extra logging added:


    ffmpeg -v 9 -loglevel 99 \
      -f webm_dash_manifest -i '.\Sample Video.v480.webm' \
      -f webm_dash_manifest -i '.\Sample Video.v720.webm' \
      -f webm_dash_manifest -i '.\Sample Video.v1080.webm' \
      -f webm_dash_manifest -i '.\Sample Video.audio.webm' \
      -c copy -map 0 -map 1 -map 2 -map 3 \
      -f webm_dash_manifest -adaptation_sets "id=0,streams=0,1,2 id=1,streams=3" \
      '.\Sample Video.mpd'


    Here's the result when it fails:


    ffmpeg version 6.1-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
      built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
      configuration: --enable-gpl --enable-version3 --enable-static --pkg-config=pkgconf --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-dxva2 --enable-d3d11va --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
      libavutil      58. 29.100 / 58. 29.100
      libavcodec     60. 31.102 / 60. 31.102
      libavformat    60. 16.100 / 60. 16.100
      libavdevice    60.  3.100 / 60.  3.100
      libavfilter     9. 12.100 /  9. 12.100
      libswscale      7.  5.100 /  7.  5.100
      libswresample   4. 12.100 /  4. 12.100
      libpostproc    57.  3.100 / 57.  3.100
    Splitting the commandline.
    Reading option '-v' ... matched as option 'v' (set logging level) with argument '9'.
    Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument '99'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
    Reading option '-i' ... matched as output url with argument '.\Sample Video.v480.webm'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
    Reading option '-i' ... matched as output url with argument '.\Sample Video.v720.webm'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
    Reading option '-i' ... matched as output url with argument '.\Sample Video.v1080.webm'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
    Reading option '-i' ... matched as output url with argument '.\Sample Video.audio.webm'.
    Reading option '-c' ... matched as option 'c' (codec name) with argument 'copy'.
    Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '0'.
    Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '1'.
    Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '2'.
    Reading option '-map' ... matched as option 'map' (set input stream mapping) with argument '3'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'webm_dash_manifest'.
    Reading option '-adaptation_sets' ... matched as AVOption 'adaptation_sets' with argument 'id=0,streams=0,1,2 id=1,streams=3'.
    Reading option '.\Sample Video.mpd' ... matched as output url.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option v (set logging level) with argument 9.
    Successfully parsed a group of options.
    Parsing a group of options: input url .\Sample Video.v480.webm.
    Applying option f (force format) with argument webm_dash_manifest.
    Successfully parsed a group of options.
    Opening an input file: .\Sample Video.v480.webm.
    [webm_dash_manifest @ 000002bbcb41dc80] Opening '.\Sample Video.v480.webm' for reading
    [file @ 000002bbcb41e300] Setting default whitelist 'file,crypto,data'
    st:0 removing common factor 1000000 from timebase
    [webm_dash_manifest @ 000002bbcb41dc80] Error parsing Cues
    [AVIOContext @ 000002bbcb41e5c0] Statistics: 102283 bytes read, 4 seeks
    [in#0 @ 000002bbcb41dac0] Error opening input: Operation not permitted
    Error opening input file .\Sample Video.v480.webm.
    Error opening input files: Operation not permitted


    This is the mediainfo output for the offending input file, Sample Video.v480.webm:


    General
    Complete name                            : .\Sample Video.v480.webm
    Format                                   : WebM
    Format version                           : Version 2
    File size                                : 628 MiB
    Duration                                 : 1 h 34 min
    Overall bit rate                         : 926 kb/s
    Frame rate                               : 23.976 FPS
    Encoded date                             : 2023-11-21 16:48:35 UTC
    Writing application                      : HandBrake 1.7.0 2023111500
    Writing library                          : Lavf60.16.100

    Video
    ID                                       : 1
    Format                                   : VP9
    Format profile                           : 0
    Codec ID                                 : V_VP9
    Duration                                 : 1 h 34 min
    Bit rate                                 : 882 kb/s
    Width                                    : 720 pixels
    Height                                   : 480 pixels
    Display aspect ratio                     : 16:9
    Frame rate mode                          : Constant
    Frame rate                               : 23.976 (24000/1001) FPS
    Color space                              : YUV
    Chroma subsampling                       : 4:2:0
    Bit depth                                : 8 bits
    Bits/(Pixel*Frame)                       : 0.106
    Stream size                              : 598 MiB (95%)
    Default                                  : Yes
    Forced                                   : No
    Color range                              : Limited
    Color primaries                          : BT.709
    Transfer characteristics                 : BT.709
    Matrix coefficients                      : BT.709


    I don't know if I need different command-line options, or whether this might be an ffmpeg or HandBrake bug. It has taken many, many hours to generate these video files (VP9 is painfully slow to encode), so I hate to redo a lot of this work, especially re-encoding the video with ffmpeg instead of HandBrake, since HandBrake is (oddly enough, considering it uses ffmpeg under the hood) noticeably faster.


    I have no idea what these "Cues" are that ffmpeg wants and can't parse, or how I would change them.
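
    For reference while debugging: the Cues element is the Matroska/WebM seek index that the webm_dash_manifest demuxer expects to find in each input. One thing that might be worth trying (an assumption on my part, not a verified fix) is remuxing the offending file, which rewrites the container, including the Cues, without touching the VP9 stream:

    ffmpeg -i '.\Sample Video.v480.webm' -c copy '.\Sample Video.v480.remux.webm'

    If the remuxed file then opens cleanly under webm_dash_manifest, that would point at how HandBrake wrote the index rather than at the video data itself.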


  • Custom Segmentation Guide: How it Works & Segments to Test

    13 November 2023, by Erin — Analytics Tips, Uncategorized

    Struggling to get the insights you’re looking for with premade reports and audience segments in your analytics ?

    Custom segmentation can help you better understand your customers, app users or website visitors, but only if you know what you’re doing.

    You can derive false insights with the wrong segments, leading your marketing campaigns or product development in the wrong direction.

    In this article, we’ll break down what custom segmentation is, useful custom segments to consider, how new privacy laws affect segmentation options and how to create these segments in an analytics platform.

    What is custom segmentation?

    Custom segmentation is when you divide your audience (customers, users, website visitors) into bespoke segments of your own design, not premade segments designed by the analytics or marketing platform provider.

    To do this, you single out “custom segment input” — data points you will use to pinpoint certain users. For example, it could be everyone who has visited a certain page on your site.

    Illustration of how custom segmentation works

    Segmentation isn’t just useful for targeting marketing campaigns; it’s also valuable for analysing your customer data. Creating segments is a great way to dive deeper into your data beyond surface-level insights.

    You can explore how various factors impact engagement, conversion rates, and customer lifetime value. These insights can help guide your higher-level strategy, not just campaigns.

    How custom segments can help your business

    As the global business world clamours to become more “data-driven,” even smaller companies collect all sorts of data on visitors, users, and customers.

    However, inexperienced organisations often become “data hoarders” without meaningful insights. They have in-house servers full of data or gigabytes stored by Google Analytics and other third-party providers.

    Illustration of a company that only collects data

    One way to leverage this data is with standard customer segmentation models. This can help you get insights into your most valuable customer groups and other standard segments.

    Custom segments, in turn, can help you dive deeper. They help you unlock insights into the “why” of certain behaviours. They can help you segment customers and your audience to figure out:

    • Why and how someone became a loyal customer
    • How high-order-value customers interact with your site before purchases
    • Which behaviours indicate audience members are likely to convert
    • Which traffic sources drive the most valuable customers

    The power of this kind of specific insight led Gartner to predict that 70% of companies will shift their focus from “big data” to “small and wide” data by 2025. The lateral detail is what helps inform your marketing strategy.

    You don’t need the same volume of data if you’re analysing and segmenting it effectively.

    Custom segment inputs: 6 data points you can use to create valuable custom segments

    To help you get started, here are six useful data points you can use as the basis for valuable custom segments, also known as custom segment inputs:

    Diagram of the different possible custom segment inputs

    Visits to certain pages

    A basic data point that’s great for custom segments is visits to certain pages. Create segments for popular middle-of-funnel pages and compare their engagement and conversion rates. 

    For example, if a user visits a case study page, you can compare their likelihood to convert vs. other visitors.
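
    In Matomo, for instance, such a segment can be written as a one-line segment string and passed to any report (the page path below is made up; =@ is Matomo’s “contains” operator):

    segment=pageUrl=@/case-studies/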

    This is a type of behavioural segmentation, but it is the easiest custom segment to set up in terms of analysis and marketing efforts.

    Visitors who perform certain actions

    The other important type of behavioural segment is visitors or users who take certain actions. Think of things like downloading a file, clicking a link, playing a video or scrolling a certain amount.

    For instance, you can create a segment of all visitors who have downloaded a white paper. This can help you explore, for example, what drives someone to download a white paper. You can look at the typical user journey and make it easier for them to access the white paper — especially if your sales reps indicate many inbound leads mention it as a key driver of their interest.

    User devices

    Device-based segmentation lets you compare engagement and conversion rates on mobile, desktop and tablets. You can also get insights into their usage patterns and potential issues with certain mobile elements.

    Mobile device users segment in Matomo Analytics

    This is one aspect of technographic segmentation, where you segment based on users’ hardware or software. You can also create segments based on browser software or even specific versions.

    Loyal or high-value customers

    The best way to get more loyal or high-value customers is to explore their journey in more detail. These types of segments can help you better understand your ideal customers and how they act on your site.

    You can then use this insight to alter your campaigns or how you communicate with your target audience.

    For example, you might notice that high-value customers tend to come from a certain source. You can then focus your marketing efforts on this source to reach more of your ideal customers.

    Visitor or customer source

    If you’re investing in marketing outside of platforms with their own analytics (like an influencer campaign or a sponsored post), you need to track the results yourself.

    Screenshot of the free Matomo tracking URL builder

    Before you can create a reliable segment, make sure you use campaign tracking parameters so the source is captured consistently. You can use our free campaign tracking URL builder for that.
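
    As an illustration, a tracked link might look like the one below (the domain and values are made up; mtm_campaign is recognised by Matomo out of the box, while parameters like mtm_source and mtm_medium come with the Marketing Campaigns Reporting plugin):

    https://example.com/landing-page?mtm_campaign=spring_sale&mtm_source=influencer_jane&mtm_medium=social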

    Demographic segments — location (country, state) and more

    Web analytics tools such as Matomo use visitors’ IP addresses to estimate their location by cross-referencing them with a database of known and estimated IP locations. In addition, these tools can detect a visitor’s location through the language settings in their browser.

    This can help create segments based on location or language. By exploring these trends, you can identify patterns in behaviour, tailor your content to specific audiences, and adapt your overall strategy to better meet the preferences and needs of your diverse visitor base.

    How new privacy laws affect segmentation options

    Over the past few years, new legislation regarding privacy and customer data has been passed globally. The most notable privacy laws are the GDPR in the EU, the CCPA in California and the VCDPA in Virginia.

    Illustration of the impact of new privacy regulations on analytics

    For most companies, choosing a GDPR-compliant web analytics solution can save a lot of work and future headaches: it not only streamlines operations, saving considerable effort, but also ensures peace of mind by guaranteeing the collection of compliant and accurate data. This approach lets companies maintain compliance with privacy regulations while remaining firmly committed to a data-driven strategy.

    Create your very own custom segments in Matomo (while ensuring compliance and data accuracy)

    Crafting precise marketing messages and optimising ROI is crucial, but it becomes challenging without the right tools, especially when it comes to maintaining accurate data.

    That’s where Matomo comes in. Our privacy-friendly web analytics platform is GDPR-compliant and ensures accurate data, empowering you to effortlessly create and analyse precise custom segments.

    If you want to improve your marketing campaigns while remaining GDPR-compliant, start your 21-day free trial of Matomo. No credit card required.