Other articles (75)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites for publishing documents of all types online.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a so-called "media" article;

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with the workings of SPIP, unlike the standalone version, which does not really require any specific knowledge since the usual SPIP private area is no longer used.
    First of all, you must have installed the same files as for the installation (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (4497)

  • Hung out to dry

    31 May 2013, by Mans — Law and liberty

    Outrage was the general reaction when Google recently announced their dropping of XMPP server-to-server federation from Hangouts, as the search giant’s revamped instant messaging platform is henceforth to be known. This outrage is, however, largely unjustified; Google’s decision is merely a rational response to issues of a more fundamental nature. To see why, we need to step back and look at the broader instant messaging landscape.

    A brief history of IM

    The term instant messaging (IM) gained popularity in the mid-1990s along with the rise of chat clients such as ICQ, AOL Instant Messenger, and later MSN Messenger. These all had one thing in common: they were closed systems. Although global in the sense of allowing access from anywhere on the Internet, communication was possible only within each network, and only using the officially sanctioned client software. Contrast this with email, where users are free to choose any service provider as well as client software, with inter-server communication over open protocols delivering messages to their proper destinations.

    The email picture has, however, not always been so rosy. During the 1970s and 80s a multitude of incompatible email systems (e.g. UUCP and X.400) were in more or less widespread use on various networks. As these networks gave way to the ARPANET/Internet, so did their mail systems to the SMTP email we all use today. A similar consolidation has yet to occur in the area of instant messaging.

    Over the years, a few efforts towards cross-domain instant messaging have been undertaken. One early example is the Zephyr system created as part of Project Athena at MIT in the late 1980s. While it never saw significant uptake, it is still in use at a few universities. A more successful story is that of XMPP. Conceived under the name Jabber in the late 1990s, XMPP is an open standard specified in a set of IETF RFCs. In addition to being open, a distinguishing feature of XMPP compared to other contemporary IM systems is its decentralised nature, with server-to-server connections allowing communication between users with accounts on different systems. Just like email.

    The social network

    A more recent emergence on the Internet is the social network. Although not the first of its kind, Facebook was the first to achieve its level of penetration, both geographically and across social groups. A range of messaging options, including email-style as well as instant messaging (chat), are available, all within the same web interface. What it does not allow is communication outside the Facebook network. Other social networks operate in the same spirit.

    The popularity of social networks, to the extent that they for many constitute the primary means of communication, has in a sense brought back the fragmented networks of the 1980s. Even though they share infrastructure, up to and including the browser application, the social networks create walled-off regions of the Internet between which little or no exchange is possible.

    The house that Google built

    In 2005, Google launched Talk, an XMPP-based instant messaging service allowing users to connect using either Google’s official client application or any third-party XMPP client. Soon after, server-to-server federation was activated, enabling anyone with a Google account to exchange instant messages with users of any other federated XMPP service. An in-browser chat interface was also added to Gmail.

    It was arguably only with the 2011 introduction of Google+ that Google, despite its previous endeavours with Orkut and Buzz, had a viable contender in the social networking space. Since its inception, Google+ has gone through a number of changes where features have been added or reworked. Instant messaging within Google+ was until recently available only in mobile clients. On the desktop, the sole messaging option was Hangouts which, although featuring text chat, cannot be considered instant messaging in the usual sense.

    With a sprawling collection of messaging systems (Talk, Google+ Messenger, Hangouts), some action to consolidate them was a logical step. What we got was a unification under the Hangouts name. A redesigned Google+ now sports in-browser instant messaging similar to the Talk interface already present in Gmail. At the same time, the standalone desktop Talk client is discontinued, as is the Messenger feature in mobile Google+. Altogether, the changes make for a much less confusing user experience.

    The sky is falling down

    Along with the changes to the messaging platform, one announcement stoked anger on the Internet: Google’s intent to discontinue XMPP federation (as of this writing, it is still operational). Google, the (self-described) champions of openness on the Internet were seen to be closing their doors to the outside world. The effects of the change are, however, not quite so earth-shattering. Of the other major messaging networks to offer XMPP at all (Facebook, Skype, and the defunct Microsoft Messenger), none support federation; a Google user has never been able to chat with a Facebook user.

    XMPP federation appears to be in use mainly by non-profit organisations or individuals running their own servers. The number of users on these systems is hard to assess, though it seems fair to assume it is dwarfed by the hundreds of millions using Google or Facebook. As such, the overall impact of cutting off communication with the federated servers is relatively minor, albeit annoying for those affected.

    A fragmented world

    Rather than chastising Google for making a low-impact, presumably well-founded, business decision, we should be asking ourselves why instant messaging is still so fragmented in the first place, whereas email is not. The answer can be found by examining the nature of entities providing these services.

    Ever since the commercialisation of the Internet started in the 1990s, email has been largely seen as being part of the Internet. Access to email was a major selling point for Internet service providers; indeed, many still use the email facilities of their ISP. Instant messaging, by contrast, has never come as part of the basic offering, rather being a third-party service running on top of the Internet.

    Users wishing to engage in instant messaging have always had to seek out and sign up with a provider of such a service. As the IM networks were isolated, most would choose whichever service their friends were already using, and a small number of networks, each with a sustainable number of users, came to dominate. In the early days, dedicated IM services such as ICQ were popular. Today, social networks have taken their place with Facebook currently in the dominant position. With the new Hangouts, Google offers its users the service they want in the way they have come to expect.

    Follow the money

    We now have all the pieces necessary to see why inter-domain instant messaging has never taken off, and the answer is simple: the major players have no commercial incentive to open access to their IM networks. In fact, they have good reason to keep the networks closed. Ensuring that a person leaving the network loses contact with his or her friends increases user retention by raising the cost of switching to another service. Monetising users is also better facilitated if they are forced to remain on, say, Facebook’s web pages while using its services rather than accessing them indirectly, perhaps even through a competing (Google, say) frontend. The users do not generally care much, since all their friends are already on the same network as themselves.

    While Google Talk was a standalone service, only loosely coupled to other Google products, these aspects were of lesser importance. After all, Google still had access to all the messages passing through the system and could analyse them for advert targeting purposes. Now that messaging is an integrated part of Google+, and thus serves as a direct competitor to the likes of Facebook, the situation has changed. All the reasons for Facebook not to open its network now apply equally to Google as well.

  • Things I Have Learned About Emscripten

    1 September 2015, by Multimedia Mike — Cirrus Retro

    3 years ago, I released my Game Music Appreciation project, a website with a ludicrously uninspired title which allowed users a relatively frictionless method to experience a range of specialized music files related to old video games. However, the site required use of a special Chrome plugin. Ever since that initial release, my #1 most requested feature has been for a pure JavaScript version of the music player.

    “Impossible!” I exclaimed. “There’s no way JS could ever run fast enough to run these CPU emulators and audio synthesizers in real time, and allow for the visualization that I demand!” Well, I’m pleased to report that I have proved me wrong. I recently quietly launched a new site with what I hope is a catchier title, meant to evoke a cloud-based retro-music-as-a-service product: Cirrus Retro. Right now, it’s basically the same as the old site, but without the wonky Chrome-specific technology.

    Along the way, I’ve learned a few things about using Emscripten that I thought might be useful to share with other people who wish to embark on a similar journey. This is geared more towards someone who has a stronger low-level background (such as C/C++) vs. high-level (like JavaScript).

    General Goals
    Do you want to cross-compile an entire desktop application, one that relies on an extensive GUI toolkit? That might be difficult (though I believe there is a path for porting Qt code directly with Emscripten). Your better wager might be to abstract out the core logic and processes of the program and then create a new web UI to access them.

    Do you want to compile a game that basically just paints stuff to a 2D canvas? You’re in luck! Emscripten has a porting path for SDL. Make a version of your C/C++ software that targets SDL (generally not a tall order) and then compile that with Emscripten.

    Do you just want to cross-compile some functionality that lives in a library? That’s what I’ve done with the Cirrus Retro project. For this, plan to compile the library into a JS file that exports some public functions that other, higher-level, native JS (i.e., JS written by a human and not a computer) will invoke.
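
    As a rough sketch of what that higher-level side can look like, here is one way to call an exported C function from hand-written JS using Emscripten’s cwrap() runtime helper. The function name do_work, its signature, and the Module object name are assumptions for illustration (the snippets later in this article call their module object player); your build must export both the function and the cwrap helper for this to work.

    // Hypothetical example: wrap an exported C function
    // int do_work(int a, int b);  (name and signature assumed)
    var doWork = Module.cwrap('do_work', 'number', ['number', 'number']);

    // Invoke the compiled code like any other JS function
    var status = doWork(42, 7);
    console.log('do_work returned ' + status);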

    Memory Levels
    When porting C/C++ software to JavaScript using Emscripten, you have to think on 2 different levels. Or perhaps you need to force JavaScript into a low level C lens, especially if you want to write native JS code that will interact with Emscripten-compiled code. This often means somehow allocating chunks of memory via JS and passing them to the Emscripten-compiled functions. And you wouldn’t believe the type of gymnastics you need to execute to get native JS and Emscripten-compiled JS to cooperate.

    “Emscripten: Pointers and Pointers” is the best (and, really, ONLY) explanation I could find for understanding the basic mechanics of this process, at least when I started this journey. However, there’s a mistake in the explanation that left me confused for a little while, and I’m at a loss to contact the author (doesn’t anyone post a simple email address anymore?).

    Per the best of my understanding, Emscripten allocates a large JS array and calls that the memory space that the compiled C/C++ code is allowed to operate in. A pointer in C/C++ code will just be an index into that mighty array. Really, that’s not too far off from how a low-level program is supposed to view memory: as a flat array.

    Eventually, I just learned to cargo-cult my way through the memory allocation process. Here’s the JS code for allocating an Emscripten-compatible byte buffer, taken from my test harness (more on that later):

    /* read the raw file and view it as an array of bytes */
    var musicBuffer = fs.readFileSync(testSpec['filename']);
    var musicBufferBytes = new Uint8Array(musicBuffer);

    /* allocate a matching buffer inside the Emscripten heap and copy the bytes in */
    var bytesMalloc = player._malloc(musicBufferBytes.length);
    var bytes = new Uint8Array(player.HEAPU8.buffer, bytesMalloc, musicBufferBytes.length);
    bytes.set(new Uint8Array(musicBufferBytes.buffer));
    

    So, read the array of bytes from some input source, create a Uint8Array from the bytes, use the Emscripten _malloc() function to allocate enough bytes from the Emscripten memory array for the input bytes, then create a new array… then copy the bytes…

    You know what? It’s late and I can’t remember how it works exactly, but it does. It has been a few months since I touched that code (been fighting with front-end website tech since then). You write that memory allocation code enough times and it begins to make sense, and then you hope you don’t have to write it too many more times.
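
    For completeness, here is a hedged sketch of what typically comes after the allocation above: pass the pointer to a compiled function and then release the memory. The exported function _load_music() is a made-up name for illustration, not the project’s real API; _malloc() and _free() are the standard Emscripten heap helpers.

    // Hand the filled buffer to a (hypothetical) exported C function
    var result = player._load_music(bytesMalloc, musicBufferBytes.length);
    if (result !== 0) {
        console.log('module reported an error: ' + result);
    }

    // Release the Emscripten-side allocation when the module is done with it
    player._free(bytesMalloc);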

    Multithreading
    You can’t port multithreaded code to JS via Emscripten. JavaScript has no notion of threads! If you don’t understand the computer science behind this limitation, a more thorough explanation is beyond the scope of this post. But trust me, I’ve thought about it a lot. In fact, the official Emscripten literature states that you should be able to port most any C/C++ code as long as 1) none of the code is proprietary (i.e., all the raw source is available); and 2) there are no threads.

    Yes, I read about the experimental pthreads support added to Emscripten recently. Don’t get too excited; that won’t be ready and widespread for a long time to come as it relies on a new browser API. In the meantime, figure out how to make your multithreaded C/C++ code run in a single thread if you want it to run in a browser.

    Printing Facility
    Eventually, getting software to work boils down to debugging, and the most primitive tool in many a programmer’s toolbox is the humble print statement. A print statement allows you to inspect a piece of a program’s state at key junctures. Eventually, when you try to cross-compile C/C++ code to JS using Emscripten, something is not going to work correctly in the generated JS “object code” and you need to understand what. You’ll be pleading for a method of just inspecting one variable deep in the original C/C++ code.

    I came up with this simple printf-workalike called emprintf():

    #ifndef EMPRINTF_H
    #define EMPRINTF_H

    #include <stdio.h>
    #include <stdarg.h>
    #include <emscripten.h>

    #define MAX_MSG_LEN 1000

    /* NOTE: Don't pass format strings that contain single quote (') or newline
     * characters. */
    static void emprintf(const char *format, ...)
    {
        char msg[MAX_MSG_LEN];
        char consoleMsg[MAX_MSG_LEN + 16];
        va_list args;

        /* create the string */
        va_start(args, format);
        vsnprintf(msg, MAX_MSG_LEN, format, args);
        va_end(args);

        /* wrap the string in a console.log('') statement */
        snprintf(consoleMsg, MAX_MSG_LEN + 16, "console.log('%s')", msg);

        /* send the final string to the JavaScript console */
        emscripten_run_script(consoleMsg);
    }

    #endif /* EMPRINTF_H */

    Put it in a file called “emprintf.h”. Include it into any C/C++ file where you need debugging visibility, use emprintf() as a replacement for printf() and the output will magically show up on the browser’s JavaScript debug console. Heed the comments and don’t put any single quotes or newlines in strings, and keep it under 1000 characters. I didn’t say it was perfect, but it has helped me a lot in my Emscripten adventures.

    Optimization Levels
    Remember to turn on optimization when compiling. I have empirically found that optimizing for size (-Os) leads to the best performance all around, in addition to having the smallest size. Just be sure to specify some optimization level. If you don’t, the default is -O0 which offers horrible performance when running in JS.

    Static Compression For HTTP Delivery
    JavaScript code compresses pretty efficiently, even after it has been optimized for size using -Os. I routinely see compression ratios between 3.5:1 and 5:1 using gzip.

    Web servers in this day and age are supposed to be smart enough to detect when a requesting web browser can accept gzip-compressed data and do the compression on the fly. They’re even supposed to be smart enough to cache compressed output so the same content is not recompressed for each request. I would have to set up a series of tests to establish whether either of the foregoing assertions are correct and I can’t be bothered. Instead, I took it into my own hands. The trick is to pre-compress the JS files and then instruct the webserver to serve these files with a ‘Content-Type’ of ‘application/javascript’ and a ‘Content-Encoding’ of ‘gzip’.

    1. Compress your large Emscripten-built JS files with ‘gzip’: ‘gzip compiled-code.js’
    2. Rename them from extension .js.gz to .jsgz
    3. Tell the webserver to deliver .jsgz files with the correct Content-Type and Content-Encoding headers

    To do that last step with Apache, specify these lines:

    AddType application/javascript jsgz
    AddEncoding gzip jsgz
    

    They belong in either a directory’s .htaccess file or in the sitewide configuration (/etc/apache2/mods-available/mime.conf works on my setup).
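
    If you happen to serve the files with Node.js instead of Apache, a minimal sketch of the same idea might look like the following. The port and directory layout are made up; the only essential part is sending those two headers for anything ending in .jsgz.

    var http = require('http');
    var fs = require('fs');
    var path = require('path');

    http.createServer(function (request, response) {
        var filePath = path.join(__dirname, request.url);
        if (path.extname(filePath) !== '.jsgz') {
            response.writeHead(404);
            response.end();
            return;
        }
        fs.readFile(filePath, function (error, data) {
            if (error) {
                response.writeHead(404);
                response.end();
                return;
            }
            /* the two headers that let the browser use the pre-compressed file */
            response.writeHead(200, {
                'Content-Type': 'application/javascript',
                'Content-Encoding': 'gzip'
            });
            response.end(data);
        });
    }).listen(8080);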

    Build System and Build Time Optimization
    Oh goodie, build systems! I had a very specific manner in which I wanted to build my JS modules using Emscripten. Can I possibly coerce any of the many popular build systems to do this? It has been a few months since I worked on this problem specifically, but I seem to recall that the build systems I tried to use would freak out at the prospect of compiling stuff to a final binary target of .js.

    I had high hopes for Bazel, which Google released while I was developing Cirrus Retro. Surely, this is software that has been battle-tested in the harshest conditions of one of the most prominent software-developing companies in the world, needing to take into account the most bizarre corner cases and still build efficiently and correctly every time. And I have little doubt that it fulfills the order. Similarly, I’m confident that Google also has a team of no fewer than 100 or so people dedicated to developing and supporting the project within the organization. When you only have, at best, 1-2 hours per night to work on projects like this, you prefer not to fight with such cutting-edge technology, and after losing 2 or 3 nights trying to make a go of Bazel, I eventually put it aside.

    I also tried to use Autotools. It failed horribly for me, mostly for my own carelessness and lack of early-project source control.

    After that, it was strictly vanilla makefiles with no real dependency management. But you know what helps in these cases? ccache! Or at least, it would if it didn’t fail with Emscripten.

    Quick tip: ccache has trouble with LLVM unless you set the CCACHE_CPP2 environment variable (e.g. “export CCACHE_CPP2=1”). I don’t remember the specifics, but it magically fixes things. Then, the lazy build process becomes “make clean && make”.

    Testing
    If you have never used Node.js, testing Emscripten-compiled JS code might be a good opportunity to start. I was able to use Node.js to great effect for testing the individually-compiled music player modules, wiring up a series of invocations using Python for a broader test suite (wouldn’t want to go too deep down the JS rabbit hole, after all).

    Be advised that Node.js doesn’t enjoy the same kind of JIT optimizations that the browser engines leverage. Thus, in the case of time critical code like, say, an audio synthesis library, the code might not run in real time. But as long as it produces the correct bitwise waveform, that’s good enough for continuous integration.
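
    As an illustration of the kind of check that is good enough for continuous integration, here is a hedged sketch that compares a waveform dumped by the compiled module against a known-good reference file, byte for byte. The file names are made up for the example.

    var fs = require('fs');

    // output produced by the Emscripten-compiled module (file name assumed)
    var rendered = fs.readFileSync('rendered-output.pcm');
    // known-good waveform captured from the original native code (assumed)
    var reference = fs.readFileSync('reference-output.pcm');

    if (rendered.equals(reference)) {
        console.log('PASS: bitwise-identical waveform');
    } else {
        console.log('FAIL: output differs from reference');
        process.exit(1);
    }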

    Also, if you have largely been a low-level programmer for your whole career and are generally unfamiliar with the world of single-threaded, event-driven, callback-oriented programming, you might be in for a bit of a shock. When I wanted to learn how to read the contents of a file in Node.js, this is the first tutorial I found on the matter. I thought the code presented was a parody of bad coding style:

    var fs = require("fs");
    var fileName = "foo.txt";

    fs.exists(fileName, function(exists) {
      if (exists) {
        fs.stat(fileName, function(error, stats) {
          fs.open(fileName, "r", function(error, fd) {
            var buffer = new Buffer(stats.size);

            fs.read(fd, buffer, 0, buffer.length, null, function(error, bytesRead, buffer) {
              var data = buffer.toString("utf8", 0, buffer.length);

              console.log(data);
              fs.close(fd);
            });
          });
        });
      }
    });

    Apparently, this kind of thing doesn’t raise an eyebrow in the JS world.

    Now, I understand and respect the JS programming model. But this was seriously frustrating when I first encountered it because a simple script like the one I was trying to write just has an ordered list of tasks to complete. When it asks for bytes from a file, it really has nothing better to do than to wait for the answer.

    Thankfully, it turns out that Node’s fs module includes synchronous versions of the various file access functions. So it’s all good.
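
    For reference, a synchronous version of the same (hypothetical) foo.txt read collapses to a few lines, since fs.readFileSync() simply blocks until the data is available:

    var fs = require("fs");

    // blocks until the whole file has been read
    var data = fs.readFileSync("foo.txt", "utf8");
    console.log(data);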

    Conclusion
    I’m sure I missed or underexplained some things. But if other brave souls are interested in dipping their toes in the waters of Emscripten, I hope these tips will come in handy.

  • Google Analytics 4 (GA4) vs Universal Analytics (UA)

    24 January 2022, by Erin — Analytics Tips

    March 2022 Update: It’s official! Google announced that Universal Analytics will no longer process any new data as of 1 July 2023. Google is now pushing Universal Analytics users to switch to the latest version of GA – Google Analytics 4.

    Currently, Google Analytics 4 is unable to accept historical data from Universal Analytics. Users need to take action before July 2022 to ensure they have 12 months of data built up before the sunset of Universal Analytics.

    So how do Universal Analytics and Google Analytics 4 compare? And what alternative options do you have? Let’s dive in.

    What is Google Analytics 4?

    In October 2020, Google launched Google Analytics 4, a completely redesigned analytics platform. This follows on from the previous version known as Universal Analytics (or UA).

    Amongst its touted benefits, GA4 promises a completely new way to model data and even the ability to predict future revenue. 

    However, the reception of GA4 has been largely negative. In fact, some users from the digital marketing community have said that GA4 is awful, unusable and so bad it can bring you to tears.

    Gill Andrews via Twitter

    Google Analytics 4 vs Universal Analytics

    There are some pretty big differences between Google Analytics 4 and Universal Analytics but for this blog, we’ll cover the top three.

    1. Redesigned user interface (UI)

    GA4 features a completely redesigned UI compared to Universal Analytics’ popular interface. This dramatic change has left many users confused and prompted some to declare that “most of the time you are going round in circles to find what you’re looking for.”

    Google Analytics 4 missing features (Mike Huggard via Twitter)

    2. Event-based tracking

    Google Analytics 4 also brings with it a new data model which is purely event-based. This event-based model moves away from the typical “pageview” metric that underpins Universal Analytics.

    3. Machine learning insights

    Google Analytics 4 promises to “predict the future behavior of your users” with its machine-learning-powered predictive metrics. This feature can “use shared aggregated and anonymous data to improve model quality”. Sounds powerful, right?

    Unfortunately, it only works if at least 1,000 returning users triggered the relevant predictive condition over a seven-day period. Also, if the model isn’t sustained over a “period of time” then it won’t work. And according to Google, if “the model quality for your property falls below the minimum threshold, then Analytics will stop updating the corresponding predictions”.

    This means GA4’s machine learning insights probably won’t work for the majority of analytics users.

    Ultimately, GA4 is just not ready to replace Google’s Universal Analytics for most users. There are too many missing features.

    What’s missing in Google Analytics 4?

    Quite a lot. Even though it offers a completely new approach to analytics, there are a lot of key features and functions missing in GA4.

    Behavior Flow

    The Behavior Flow report in Universal Analytics helps to visualise the path users take from one page or Event to the next. It’s extremely useful when you’re looking for quick and clear insight. But it no longer exists in Google Analytics 4, and instead, two new overcomplicated reports have been introduced to replace it – funnel exploration report and path exploration report.

    The decision to remove this critical report will leave many users feeling disappointed and frustrated. 

    Limitations on custom dimensions

    You can create custom dimensions in Google Analytics 4 to capture advanced information. For example, if a user reads a blog post, you can supplement that data with custom dimensions like author name or blog post length. However, you can only use up to 50, a limit that makes this kind of functionality almost pointless for some.
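
    To illustrate the pattern, here is a hedged gtag.js sketch of how such detail is typically sent in GA4: extra values travel as event parameters, and each parameter then has to be registered as a custom dimension in the GA4 admin interface (counting against the limit above). The event and parameter names are examples only.

    // Hypothetical event with extra detail attached as parameters
    gtag('event', 'blog_post_read', {
        author_name: 'Jane Doe',      // would be registered as an event-scoped custom dimension
        post_word_count: 1850
    });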

    Machine learning (ML) limitations

    Google Analytics 4 promises powerful ML insights to predict the likelihood of users converting based on their behaviors. The problem? You need 1,000 returning users in one week. For most small-to-medium businesses this just isn’t possible.

    And if you do get this level of traffic in a week, there’s another hurdle. According to Google, if “the model quality for your property falls below the minimum threshold, then GA will stop updating the corresponding predictions.” To add insult to injury Google suggests that this might make all ML insights unavailable. But they can’t say for certain… 

    Views

    One cornerstone of Universal Analytics is the ability to configure views. Views allow you to set up separate analytics environments, for example for testing, or for cleaning up data by filtering out internal traffic.

    Views are great for quickly and easily filtering data. Preset views that contain just the information you want to see are the ideal analytics setup for smaller businesses, casual users, and do-it-yourself marketing departments.

    Via Reddit

    There are a few workarounds but they’re “messy [,] annoying and clunky,” says a disgruntled Redditor.

    Another helpful Reddit user stumbled upon an unhelpful statement from Google. Google says that they “do not offer [the views] feature in Google Analytics 4 but are planning similar functionality in the future.” There’s no specific date yet though.

    Bounce rate

    Those that rely on bounce rate to understand their site’s performance will be disappointed to find out that bounce rate is also not available in GA4. Instead, Google is pushing a new metric known as “Engagement Rate”. With this metric, Google now uses their own formula to establish if a visitor is engaged with a site.

    Lack of integration

    Currently, GA4 isn’t ready to integrate with many core digital marketing tools and doesn’t accept non-Google data imports. This makes it difficult for users to analyse ROI and ROAS for campaigns measured in other tools. 

    Content Grouping

    Yet another key feature that Google has done away with is Content Grouping. However, as with some of the other missing features in GA4, there is a workaround, but it’s not simple for casual users to implement. In order to keep using Content Grouping, you’ll need to create event-scoped custom dimensions.

    Annotations 

    A key feature of Universal Analytics is the ability to add custom Annotations in views. Annotations are useful for marking dates that site changes were made for analysis in the future. However, Google has removed the Annotations feature and offered no alternative or workaround.

    Historical data imports are not available

    The new approach to data modelling in GA4 adds new functionality that UA can’t match. However, it also means that you can’t import historical UA data into GA4. 

    Google’s suggestion for this one? Keep running UA with GA4 and duplicate events for your GA4 property. Now you will have two different implementations running alongside each other and doing slightly different things, which doesn’t sound like a particularly streamlined solution and adds another level of complexity.

    Should you switch to Google Analytics 4?

    So the burning question is, should you switch from Universal Analytics to Google Analytics 4? It really depends on whether you have the available resources and if you believe this tool is still right for your organisation. At the time of writing, GA4 is not ready for day-to-day use in most organisations.

    If you’re a casual user or someone looking for quick, clear insights then you will likely struggle with the switch to GA4. It appears that the new Google Analytics 4 has been designed for enterprise-scale businesses with large internal teams of analysts.

    Google Analytics 4 UX changes (Micah Fisher-Kirshner via Twitter)

    Unfortunately, for most casual users, business owners and do-it-yourself marketers there are complex workarounds and time-consuming implementations to handle. Ultimately, it’s up to you to decide if the effort to migrate and relearn GA is worth it.

    Right now is the best time to draw the line and make a decision to either switch to GA4 or look for a better alternative to Google Analytics.

    Google Analytics alternative

    Matomo is one of the best Google Analytics alternatives, offering an easy-to-use design with enhanced insights across our Cloud, On-Premise and Matomo for WordPress solutions.

    Google Analytics 4 Switch to Matomo (Mark Samber via Twitter)

    Matomo is an open-source analytics solution that provides a comprehensive, user-friendly and compliance-focused alternative to both Google Analytics 4 and Universal Analytics.

    Matomo offers a range of key benefits over both GA4 and Universal Analytics.

    Plus, unlike GA4, Matomo will accept your historical data from UA so you don’t have to start all over again. Check out our 7 step guide to migrating from Google Analytics to find out how.

    Getting started with Matomo is easy. Check out our live demo and start your free 21-day trial. No credit card required.

    In addition to the limitations and complexities of GA4, there are many other significant drawbacks to using Google Analytics.

    Google’s data ethics are a growing concern of many and it is often discussed in the mainstream media. In addition, GA is not GDPR compliant by default and has resulted in 200k+ data protection cases against websites using GA.

    What’s more, the data that Google Analytics actually provides its end-users is extrapolated from samples. GA’s data sampling model means that once you’ve collected a certain amount of data Google Analytics will make educated guesses rather than use up its server space collecting your actual data. 

    The reasons to switch from Google Analytics are rising each day. 

    Wrap up

    The now required update to GA4 will add new layers of complexity, which will leave many casual web analytics users and marketers wondering if there’s a better way. Luckily there is. Get clear insights quickly and easily with Matomo – start your 21-day free trial now.