Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (63)

  • Improving the base version

    13 September 2013

    Nice multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images for a comparison.
    To use it, activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • The plugin: mutualisation management

    2 March 2010, by

    The mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its purpose is to provide a pure SPIP solution to replace this older solution.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customise the central mes_options.php file as you see fit. As an example, here is the one from the mediaspip.net platform:
    <?php (...)

  • Farm management

    2 March 2010, by

    The farm as a whole is managed by "super admins".
    Certain settings can be adjusted to regulate the needs of the different channels.
    To begin with, it relies on the "Gestion de mutualisation" plugin

On other sites (4544)

  • 10 Key Google Analytics Limitations You Should Be Aware Of

    9 May 2022, by Erin

    Google Analytics (GA) is the biggest player in the web analytics space. But is it as “universal” as its brand name suggests?

    Over the years, users have pointed out a number of major Google Analytics limitations. Many of these are even more visible in Google Analytics 4.

    Introduced in 2020, Google Analytics 4 (GA4) has been received with scepticism. As the 1 July 2023 sunset date for the current version, Google Universal Analytics (UA), approaches, the dismay grows stronger.

    To the point where people are pleading with others to intervene:

    (Image: GA4 tweet. Source: Chris Tweten via Twitter)

    Main limitations of Google Analytics

    Google Analytics 4 is advertised as a more privacy-centred, comprehensive and “intelligent” web analytics platform. 

    According to Google, the newest version touts:

    • Machine learning at its core provides better segmentation and fast-track access to granular insights 
    • Privacy-by-design controls, addressing restrictions on cookies and new regulatory demands 
    • More complete understanding of customer journeys across channels and devices 

    Some of these claims hold true. Others crumble under deeper investigation. Newly advertised Google Analytics capabilities such as ‘custom events’, ‘predictive insights’ and ‘privacy consent mode’ offer only marginal improvements.

    A complex setup, poor UI and lack of migration support also leave many users frustrated with GA4.

    Let’s unpack all the current (and legacy) limitations of Google Analytics you should account for. 

    1. No Historical Data Imports 

    Google rushed users to migrate from Universal Analytics to Google Analytics 4. But they overlooked one important precondition — backwards compatibility. 

    You have no way to import data from Google Universal Analytics to Google Analytics 4. 

    Historical records are essential for analysing growth trends and creating benchmarks for new marketing campaigns. Effectively, you are cut off from past insights — and forced to start strategising from scratch.

    At present, Google offers two feeble solutions:

    • Run data collection in parallel and keep separate reporting for GA4 and UA until the latter is shut down. After that, your UA records are gone.
    • For ecommerce data, manually duplicate events from UA in a new GA4 property while trying to figure out the new event names and parameters.

    Google’s new data collection model is the reason for migration difficulties. 

    In Google Analytics 4, all analytics hit types — page hits, social hits, app/screen views, etc. — are recorded as events. Accordingly, the ‘event’ parameter in GA4 differs from the one in Google Universal Analytics, as the company explains:

    (Image: GA4 vs Universal Analytics event parameters. Source: Google)

    This change makes migration tedious — and Google offers little assistance with setting up proper events and custom dimensions.
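    For illustration, here is a minimal sketch of what that event model looks like on the wire, sending a page view through GA4’s Measurement Protocol. The measurement ID, API secret and client ID are placeholders, and the helper itself is hypothetical:

      import json
      import urllib.request

      # GA4 Measurement Protocol: every hit, including a page view, is a
      # named event carrying a flat dict of parameters.
      GA4_ENDPOINT = "https://www.google-analytics.com/mp/collect"
      MEASUREMENT_ID = "G-XXXXXXXXXX"   # placeholder
      API_SECRET = "your-api-secret"    # placeholder

      def send_page_view(client_id, page_location, page_title):
          payload = {
              "client_id": client_id,  # pseudonymous browser/device identifier
              "events": [{
                  "name": "page_view",
                  "params": {
                      "page_location": page_location,
                      "page_title": page_title,
                  },
              }],
          }
          url = f"{GA4_ENDPOINT}?measurement_id={MEASUREMENT_ID}&api_secret={API_SECRET}"
          req = urllib.request.Request(
              url,
              data=json.dumps(payload).encode("utf-8"),
              headers={"Content-Type": "application/json"},
          )
          urllib.request.urlopen(req)  # GA4 answers 2xx with an empty body

      send_page_view("555.123", "https://example.com/pricing", "Pricing")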

    2. Data Collection Limits 

    If you’ve wrapped your head around new GA4 events, congrats! You did a great job, but the hassle isn’t over.

    You still need to pay attention to new Google Analytics limits on data collection for event parameters and user properties. 

    (Image: GA4 event limits. Source: Google)

    These apply to:

    • Automatically collected events
    • Enhanced measurement events
    • Recommended events 
    • Custom events 

    When it comes to custom events, GA4 also caps each event at 25 custom parameters. Even though that sounds like a lot, it may not be enough for bigger websites.

    You can get higher limits by upgrading to Google Analytics 360, but the costs are steep. 
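    If you send events server-side, a small pre-flight check can catch limit violations before data is silently dropped. A sketch using the caps documented at the time of writing (the helper itself is hypothetical):

      # GA4 documented caps: 25 parameters per event, 40-character event
      # and parameter names, 100-character parameter values.
      MAX_PARAMS_PER_EVENT = 25
      MAX_NAME_LEN = 40
      MAX_VALUE_LEN = 100

      def validate_event(name, params):
          """Return a list of limit violations; an empty list means the event fits."""
          problems = []
          if len(name) > MAX_NAME_LEN:
              problems.append(f"event name '{name}' exceeds {MAX_NAME_LEN} chars")
          if len(params) > MAX_PARAMS_PER_EVENT:
              problems.append(f"{len(params)} params exceed the cap of {MAX_PARAMS_PER_EVENT}")
          for key, value in params.items():
              if len(key) > MAX_NAME_LEN:
                  problems.append(f"param name '{key}' exceeds {MAX_NAME_LEN} chars")
              if len(str(value)) > MAX_VALUE_LEN:
                  problems.append(f"value of '{key}' exceeds {MAX_VALUE_LEN} chars")
          return problems

      print(validate_event("add_to_cart", {"item_id": "SKU-1", "currency": "EUR"}))  # []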

    3. Limited GDPR Compliance 

    Google Analytics has a complex history with European GDPR compliance.

    A 2020 ruling by the Court of Justice of the European Union (CJEU) invalidated the Privacy Shield framework Google leaned upon. This framework had allowed the company to regulate EU-US transfers of sensitive user data.

    After this loophole was closed, Google faced a series of privacy-related fines:

    • The French data protection authority, CNIL, ruled that “the transfers to the US of personal data collected through Google Analytics are illegal” — and proceeded to fine Google a record-setting €150 million at the beginning of 2022.
    • Austrian regulators likewise deemed Google in breach of GDPR requirements and branded the analytics service illegal.

    Other EU member states may soon issue similar rulings. These, in turn, can directly affect Google Analytics users, whose businesses could face brand damage and regulatory fines for non-compliance. In fact, companies cannot choose where the collected analytics data is stored — on European servers or abroad — nor can they obtain this information from Google.

    Getting a web analytics platform that allows you to keep data on your own servers or select specific Cloud locations is a great alternative. 

    Google has also been lax with its cookie consent policy and doesn’t properly inform consumers about data collection, storage or subsequent usage. Google Analytics 4 addresses this issue only to an extent.

    By default, GA4 relies on first-party cookies instead of third-party ones — a step forward. But the user privacy controls are hard to configure without losing most of the GA4 functionality. Implementing user consent mode for different types of data collection also requires a heavy setup.

    4. Strong Reliance on Sampled Data 

    To compensate for ditching third-party cookies, GA4 leans more heavily on sampled data and machine learning to fill the gaps in reporting.

    In GA4, sampling automatically applies when you:

    • Perform advanced analysis such as cohort analysis, exploration, segment overlap or funnel analysis without enough data
    • Have over 10,000,000 data rows and generate any type of non-default report

    Google also notes that data sampling can occur at lower thresholds when you try to get granular insights, if there is not enough data or if Google deems the query too complex to compute.

    In their words:

    (Image: excerpt from Google’s documentation on sampling. Source: Google)

    Data sampling adds guesswork to your reports, meaning you can’t be 100% sure of their accuracy. The divergence from actual data depends on the size and quality of the sample. Again, this isn’t something you can control.
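    As a toy illustration of that uncertainty (not Google’s actual algorithm), the sketch below estimates a conversion rate from a 10% random sample of sessions and compares it with the true value:

      import random

      random.seed(7)
      TRUE_RATE = 0.031  # true site-wide conversion rate in this simulation
      sessions = [random.random() < TRUE_RATE for _ in range(1_000_000)]

      sample = random.sample(sessions, k=len(sessions) // 10)  # 10% sample
      estimate = sum(sample) / len(sample)

      print(f"true rate:    {sum(sessions) / len(sessions):.4%}")
      print(f"sampled rate: {estimate:.4%}")
      # The estimate drifts from the true rate; with rarer events or smaller
      # samples the relative error grows, and the report reader cannot tell.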

    Unlike Google Analytics 4, Matomo applies no data sampling. Your reports are always accurate and fully representative of actual user behaviours. 

    5. No Proper Data Anonymization 

    Data anonymization allows you to collect basic analytics about users — visits, clicks, page views — without personally identifiable information (PII) such as geo-location, assigned tracking IDs or other cookie-based data.

    This reduces your ability to:

    • Remarket 
    • Identify repeating visitors
    • Do advanced conversion attribution 

    But you still get basic data from users who ignored or declined the consent prompt for data collection.
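    To make the idea concrete, here is a generic sketch of the common IP-masking technique (zeroing the host portion of the address), independent of either product:

      import ipaddress

      def mask_ip(addr):
          """Zero the host bits so the address identifies a network, not a visitor."""
          ip = ipaddress.ip_address(addr)
          prefix = 24 if ip.version == 4 else 48  # drop last octet / host bits
          net = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
          return str(net.network_address)

      print(mask_ip("203.0.113.42"))      # 203.0.113.0
      print(mask_ip("2001:db8::8a2e:1"))  # 2001:db8::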

    By default, Google Analytics 4 anonymizes all user IP addresses — an upgrade from UA. However, it still assigns a unique user ID to each visitor, and such IDs count as personal data under GDPR.

    For comparison, Matomo provides more advanced privacy controls. You can anonymize:

    • Previously tracked raw data 
    • Visitor IP addresses
    • Geo-location information
    • User IDs 

    This helps ensure compliance, especially if you operate in a sensitive industry — and delights privacy-minded users!

    6. No Roll-Up Reporting

    Getting a bird’s-eye view of all your data is helpful when you need quick access to headline metrics — global traffic volume, user count or the percentage of returning visitors.

    With Roll-Up Reporting, you can see global performance metrics for multiple localised properties (.co.nz, .co.uk, .com, etc.) on one screen, then zoom in on specific localised sites when you need to.

    7. Report Processing Latency 

    The average data processing latency is 24-48 hours with Google Analytics. 

    Accounts with over 200,000 daily sessions get data refreshes only once a day, so you won’t be seeing the latest data on core metrics. This can be a bummer during one-day promo events like Black Friday or Cyber Monday, when real-time information can prove game-changing!

    Matomo processes data with lower latency even for high-traffic websites. Currently, we have 6-24 hour latency for cloud deployments. On-premises web analytics can be refreshed even faster — within an hour or instantly, depending on the traffic volumes. 

    8. No Native Conversion Optimisation Features

    Google Analytics users have to turn to third-party tools for deeper insights, such as how people interact with a webpage or a call-to-action.

    You can use the free Google Optimize tool, but it comes with limits:

    • No segmentation is available
    • Only 10 simultaneous experiments are allowed

    There isn’t a native integration between Google Optimize and Google Analytics 4. Instead, you have to manually link an Optimize Container to an analytics account. Also, you can’t select experiment dimensions in Google Analytics reports.

    What’s more, Google Optimize is a basic CRO tool, best suited for split testing (A/B testing) of copy, visuals, URLs and page layouts. If you want to get more advanced data, you need to pay for extra tools. 

    Matomo comes with a set of built-in conversion optimisation features:

    • Heatmaps 
    • User session recording 
    • Sales funnel analysis 
    • A/B testing 
    • Form submission analytics 
    (Image: A/B test hypothesis testing in Matomo)

    9. Deprecated Annotations

    Annotations come in handy when you need to provide extra context to other team members. For example, point out unusual traffic spikes or highlight a leak in the sales funnel. 

    This feature was available in Universal Analytics but is gone in Google Analytics 4. In Matomo, you can still quickly capture, comment on and share knowledge with your team.

    You can add annotations to any graph that shows statistics over time including visitor reports, funnel analysis charts or running A/B tests. 
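    Annotations can also be created programmatically through Matomo’s HTTP Reporting API (the Annotations.add method); the host, site ID and token below are placeholders:

      import urllib.parse
      import urllib.request

      MATOMO_URL = "https://matomo.example.com/index.php"  # placeholder instance
      params = {
          "module": "API",
          "method": "Annotations.add",
          "idSite": "1",
          "date": "2022-05-09",
          "note": "Promo campaign went live",
          "format": "JSON",
          "token_auth": "YOUR_TOKEN_AUTH",  # placeholder credentials
      }
      with urllib.request.urlopen(f"{MATOMO_URL}?{urllib.parse.urlencode(params)}") as resp:
          print(resp.read().decode("utf-8"))  # JSON describing the new annotation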

    10. No White Label Option 

    This might be a minor limitation of Google Analytics, but a tangible one for agency owners. 

    Offering an on-brand, embedded web analytics platform can elevate your customer experience. But white-label analytics was never an option with Google Analytics, unlike Matomo.

    Wrap Up 

    Google set a high bar for web analytics. But Google Analytics’ inherent limitations around privacy, reporting and deployment options prompt more users to consider Google Analytics alternatives, like Matomo.

    With Matomo, you can easily migrate your historical data records and store customer data locally or in a designated cloud location. We operate on a 100% unsampled data principle and provide an array of privacy controls for advanced compliance.

    Start your 21-day free trial (no credit card required) to see how Matomo compares to Google Analytics!

  • Parsing The Clue Chronicles

    30 December 2018, by Multimedia Mike — Game Hacking

    A long time ago, I procured a 1999 game called Clue Chronicles: Fatal Illusion, based on the classic board game Clue, a.k.a. Cluedo. At the time, I was big into collecting old, unloved PC games so that I could research obscure multimedia formats.



    Surveying the 3 CD-ROMs contained in the box packaging revealed only Smacker (SMK) videos for full motion video, which was nothing new to me or the multimedia hacking community at the time. Studying the mix of data formats present on the discs, I found a selection of straightforward formats such as WAV for audio and BMP for still images. I generally find myself more fascinated by how computer games are constructed than by playing them, and this mix of files has always triggered a strong “I could implement a new engine for this!” feeling in me, perhaps as part of the ScummVM project, which already provides the core infrastructure for reimplementing engines for 2D adventure games.

    Tying all of the assets together is a custom high-level programming language. I touched on this in a blog post over a decade ago. The scripts are in a series of files bearing the extension .ini (usually reserved for configuration files, but we’ll let that slide). A representative sample of such a script can be found here:

    clue-chronicles-scarlet-1.txt

    What Is This Language?
    At the time I first analyzed this language, I was still primarily a C/C++-minded programmer, with a decent amount of Perl experience as a high-level language, and had just started to explore Python. I assessed this language to be “mildly object oriented with C++-type comments (‘//’) and reliant upon a number of implicit library functions”. Other people saw other properties. When I look at it nowadays, it reminds me a bit more of JavaScript than of C++. I think it’s sort of a Rorschach test for programming languages.

    Strangely, I had this fear that I would put a lot of effort into figuring out how to parse the language, only for someone to come along and point out that it’s a well-known, if academic, language that already has a great deal of supporting code and libraries available as open source. Google for “spanish dolphins far side comic” for an illustration of the feeling this would leave me with.

    It doesn’t matter in the end. Even if such libraries exist, how easy would they be to integrate into something like ScummVM? Time to focus on a workable approach to understanding and processing the format.

    Problem Scope
    So I set about seeing whether I could write a program to parse the language seen in these INI files. Some questions:

    1. How large is the corpus of data that I need to be sure to support?
    2. What parsing approach should I take?
    3. What is the exact language format?
    4. Are there other hidden challenges?

    To figure out how large the data corpus is, I counted all of the INI files on all of the discs. There are 138 unique INI files between the 3 discs. However, there are 146 unique INI files after installation. This leads to a hidden challenge described a bit later.

    What parsing approach should I take? I worried a bit too much that I might not be doing this the “right” way. I’m trying to ignore doubts like this — like how “SQL shame” blocked me on a task for a little while a few years ago as I worried that I might not be using the purest, most elegant approach to the problem. I know I covered language parsing a long time ago in my university computer science education, and there is a lot of academic literature on the matter. But sometimes you just have to charge in, experiment, prototype and see what falls out. In doing so, I expect to gain a better understanding of the problems that need to be solved and the right questions to ask — not unlike the time I wrote a continuous integration system from scratch because I didn’t know that “continuous integration” was the keyword I needed.

    Next, what is the exact language format? I realized that parsing the language isn’t the first and foremost problem here — I need to know exactly what the language is. I need to know what the grammar and keywords are. In essence, I need to reverse engineer the language before I can write a proper parser for it. I guess that fits nicely with the historical aim of this blog (reverse engineering).

    Now, about the hidden challenges — I mentioned that there are 8 more INI files after the game installs itself. Okay, so what’s the big deal? For some reason, all of the INI files are in plaintext on the CD-ROM but get compressed (apparently, judging by the file size ratios) when installed to the hard drive. This includes those 8 extra INI files. I thought to look inside the CAB installation archive on the CD-ROM, and the files were there… but all in compressed form. I suspect that one of those files forms the “root” of the program and is the launching point for the game.

    Parsing Approach
    I took a stab at parsing an INI file. My approach was to first perform lexical analysis on the file and create a list of 4 token types: symbols, numbers, strings, and language elements ([]{}()=.,:). Apparently, this is the kind of thing that Lex/Flex is good at. This prototyping tool is written in Python, but when I port it to ScummVM, it might be useful to call upon the services of Lex/Flex, or another lexical analyzer, for there are many. I have a feeling it will be easier to choose the right tools once I understand the full structure of the language based on the data available.
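    As a rough illustration, a regex-based lexer along those lines might look like the following in Python; the string and comment rules are guesses based on the description above:

      import re

      # One named group per token class; whitespace and '//' comments are skipped.
      TOKEN_SPEC = [
          ("COMMENT", r"//[^\n]*"),
          ("STRING",  r'"[^"\n]*"'),
          ("NUMBER",  r"-?\d+(?:\.\d+)?"),
          ("SYMBOL",  r"[A-Za-z_][A-Za-z0-9_]*"),
          ("ELEMENT", r"[\[\]{}()=.,:]"),
          ("SKIP",    r"\s+"),
      ]
      MASTER_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

      def tokenize(text):
          """Yield (token_type, lexeme) pairs from a script."""
          for match in MASTER_RE.finditer(text):
              if match.lastgroup not in ("SKIP", "COMMENT"):
                  yield match.lastgroup, match.group()

      print(list(tokenize('layer = { file = "intro.smk", looping = false } // demo')))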

    The purpose of this tool is to explore all the possibilities of the existing corpus of INI files. To that end, I ran all 138 of the plaintext files through it, collected all of the symbols, and massaged the results, assuming that the symbols that occur most frequently are probably core language features. These are all the symbols which occur more than 1000 times across the scripts:

       6248 false
       5734 looping
       4390 scripts
       3877 layer
       3423 sequentialscript
       3408 setactive
       3360 file
       3257 thescreen
       3239 true
       3008 autoplay
       2914 offset
       2599 transparent
       2441 text
       2361 caption
       2276 add
       2205 ge
       2197 smackanimation
       2196 graphicscript
       2196 graphic
       1977 setstate
       1642 state
       1611 skippable
       1576 desc
       1413 delayscript
       1298 script
       1267 seconds
       1019 rect
    
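    Producing a tally like the one above is a natural fit for collections.Counter. A sketch reusing the tokenize() function from the earlier snippet (the scripts/ directory layout is hypothetical):

      import glob
      from collections import Counter

      counts = Counter()
      for path in glob.glob("scripts/*.ini"):
          with open(path, "r", encoding="ascii", errors="replace") as f:
              counts.update(lexeme.lower()
                            for kind, lexeme in tokenize(f.read())
                            if kind == "SYMBOL")

      for lexeme, n in counts.most_common():
          if n > 1000:  # the same cutoff used for the list above
              print(f"{n:8d} {lexeme}")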

    About That Compression
    I have sorted out at least these few details of the compression:

    bytes 0-3    "COMP" (a pretty strong sign that this is, in fact, compressed data)
    bytes 4-11   unknown
    bytes 12-15  size of uncompressed data
    bytes 16-19  size of compressed data (filesize - 20)
    bytes 20-    compressed payload
    
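    That layout maps directly onto Python’s struct module. A sketch of a header reader, assuming the two size fields are little-endian:

      import struct

      def read_comp_header(data):
          """Split a compressed file into its 20-byte header fields and payload."""
          magic, unknown, uncompressed_size, compressed_size = struct.unpack(
              "<4s8sII", data[:20])
          if magic != b"COMP":
              raise ValueError("not a COMP file")
          assert compressed_size == len(data) - 20  # matches the observed layout
          return unknown, uncompressed_size, data[20:]

      with open("installed_script.ini", "rb") as f:  # hypothetical installed file
          unknown, out_size, payload = read_comp_header(f.read())
      print(f"expect {out_size} bytes after decompression; payload is {len(payload)}")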

    The compression ratios are on the same order as gzip’s. I was hoping it was stock zlib data, but I have been unable to prove this. I wrote a Python script that scrubbed through the first 100 bytes of payload data and tried to get Python’s zlib.decompress to initialize — no luck. It’s frustrating to know that I’ll have to reverse engineer a compression algorithm that covers just 8 total text files if I want to see this effort through to fruition.
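    A scan along those lines is easy to reproduce. A sketch, with payload holding the bytes after the 20-byte header from the previous snippet:

      import zlib

      def probe_zlib(payload, max_offset=100):
          """Return the first offset where zlib accepts the stream, else None."""
          for offset in range(min(max_offset, len(payload))):
              try:
                  # decompressobj tolerates truncated input, so a partial buffer
                  # still gets past the zlib header check if this really is zlib
                  zlib.decompressobj().decompress(payload[offset:offset + 256])
                  return offset
              except zlib.error:
                  continue
          return None

      print("zlib stream found at offset:", probe_zlib(payload))  # None, as reported above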

    Update, January 15, 2019
    Some folks expressed interest in trying to sort out the details of the compression format, so I have posted a follow-up in which I share some samples and go into deeper detail about the things I have tried:

    Reverse Engineering Clue Chronicles Compression


  • Revision 32737: the default Zpip skin uses the documented conventions

    8 November 2009, by cedric@… — Log

    The default Zpip skin uses the documented conventions