
Other articles (38)

  • Writing a news item

    21 June 2013, by

    Present changes on your MédiaSPIP site, or news about your projects, using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of the news type, the default fields are: publication date (customise the publication date) (...)

  • Using and configuring the script

    19 January 2011, by

    Information specific to the Debian distribution
    If you use this distribution, you will need to enable the "debian-multimedia" repositories as explained here:
    Since version 0.3.1 of the script, the repository can be enabled automatically in response to a prompt.
    Getting the script
    The installation script can be obtained in two different ways.
    Via svn, using this command to retrieve the up-to-date source code:
    svn co (...)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

On other sites (6257)

  • Ode to the Gravis Ultrasound

    1 August 2011, by Multimedia Mike — General

    WARNING: This post is a bunch of nostalgia. Feel free to follow along if you recall the DOS days of the early-mid 1990s.

    I finally let go of my Gravis Ultrasound MAX sound card a little while ago. It felt like the end of an era for me, even though I had scarcely used the card in recent memory.



    The Beginning
    What is the Gravis Ultrasound? Only the finest PC sound card from the classic DOS days. Back in the day (very early 1990s), most consumer PC sound cards were Yamaha OPL FM synthesizers paired with a basic digital-to-analog converter (DAC). Gravis, a company known for game controllers, dared to break with the dominant paradigm of Sound Blaster clones and create a sound card that had 32 digital channels.

    I heard about the GUS sometime in 1992 through one of the dominant online services at the time, Prodigy. Through the message boards, I learned of a promotion with Electronic Arts in which customers could pre-order a GUS at a certain discount along with 2 EA games from a selected catalog (with progressive discounts when ordering more games from the list). I know I got the DOS version of PowerMonger; I think the other was Night Shift, though that doesn’t seem to be an EA title.

    Anyway, 1992 saw many maddening delays of the GUS hardware. Finally, reports of GUS shipments began to trickle into the Prodigy message forums. Then one day in November, 1992, mine arrived. Into the 286 machine it went and a valiant attempt at software installation was made. A friend and I fought with the software late into the evening, trying to make this thing work reasonably. I remember grabbing a pair of old headphones sitting near the computer that were used for an ancient (even for the time) portable radio. That was the only means of sound reproduction we had available at that moment. And it still sounded incredible.

    After graduating to progressively superior headphones, I would later return to that original pair only to feel my ears were being physically assaulted. Strange, they sounded fine that first night I was trying to make the GUS work. I guess this was my first understanding that the degree to which one is a snobby audiophile is all a matter of hard-earned experience.

    Technology
    The GUS was powered by something called a GF1 which was supposed to use a technology called wavetable synthesis. In the early days, I thought (and I wasn’t alone in this) that this meant that the GF1 chip had a bunch of digitized instrument samples stored in the ASIC. That wasn’t it.

    However, it did feature 32 digital channels at a time when most PC audio cards had 2 (plus that Yamaha FM synthesizer). There was some hemming and hawing about how the original GUS couldn’t drive all 32 channels at a full 44.1 kHz ("CD quality") playback rate. It’s true— if 14 channels were enabled, all could be played at 44.1 kHz. Enabling more channels started progressive degradation and with all 32 channels, each was only playing at around 19 kHz. Still, from my emerging game programmer perspective, that allowed for 8-channel tracker music and 6 channels of sound effects, all at the vaunted CD level of quality.

    Games and Compatibility
    The primary reason to have a discrete sound card was for entertainment applications — ahem, games. GUS support was pretty sketchy out of the gate (ostensibly a major reason for the card’s delay). While many sound cards offered Sound Blaster emulation by basically having the same hardware as Sound Blaster cards, the GUS took a software route towards emulating the SB. To do this required a program called the Sound Blaster Operating System, or SBOS.

    Oh, how awesome it was to hear the program exclaim "SBOS installed!" And how harshly it grated on your nerves after the 200th time hearing it due to so many reboots and fiddling with options to make your games work. Also, I’ve always wondered if there’s something special about sampling an ’s’ sound — does it strain the sampling frequency range? Perhaps the phrase was sampled at too low a bitrate, because the ’s’ sounds didn’t come through very clearly, which is something you notice after hundreds of iterations when there are 3 ’s’ sounds in the phrase.

    Fortunately, SBOS became less relevant with the advent of Mega-Em, a separate emulator which intercepted calls to Roland MIDI systems and routed them to the very capable GUS. Roland-supporting games sounded beautiful.

    Eventually, more and more DOS games were released with native Gravis support, sometimes with the help of The Miles Sound System (from our friends at Rad Game Tools — you know, the people behind Smacker and Bink). The library changelog is quite the trip down PC memory lane.

    An important area where the GUS shined brightly was that of demos and music trackers. The emerging PC demo scene embraced the powerful GUS (aided, no doubt, by Gravis’ sponsorship of the community) and the coolest computer art and music of the time natively supported the card.

    Programming
    At this point in my life, I was a budding programmer in high school and was fairly intent on programming video games. So far, I had figured out how to make a few blips using a borrowed Sound Blaster card. I went to great lengths to learn how to program the Gravis Ultrasound.

    Oh you kids today, with your easy access to information at the tips of your fingers thanks to Google and the broader internet. I had to track down whatever information I could find through a combination of Prodigy message boards and local dialup BBSes and FidoNet message bases. Gravis was initially tight-lipped about programming information for its powerful card, as was de rigueur for hardware companies (something that largely persists to this day). But Gravis eventually saw an opportunity to one-up incumbent Creative Labs and released a full SDK for the Ultrasound. I wanted the SDK badly.

    So it was early-mid 1993. Gravis released an SDK. I heard that it was available on their support BBS. Their BBS with a long distance phone number. If memory serves, the SDK was only in the neighborhood of 1.5 Mbytes. That takes a long time to transfer via a 2400 baud modem at a time when long distance phone charges were still a thing and not insubstantial.

    Luckily, they also put the SDK on something called an ’FTP site’. Fortunately, about this time, I had the opportunity to get some internet access via the local university.

    Indeed, my entire motivation for initially wanting to get on the internet was to obtain special programming information. Is that nerdy enough for you?

    I see that the GUS SDK is still available via the Gravis FTP site. The file GUSDK222.ZIP is dated 1998 and is less than a megabyte.

    Next Generation : CD Support
    So I had my original GUS by the end of 1992. That was just the first iteration of the Gravis Ultrasound. The next generation was the GUS MAX. When I was ready to get into the CD-ROM era, this was what I wanted in my computer. This is because the GUS MAX had CD-ROM support. This is odd to think about now when all optical drives have SATA interfaces and (P)ATA interfaces before that — what did CD-ROM compatibility mean back then? I wasn’t quite sure. But in early 1995, I headed over to Computer City (R.I.P.) and bought a new GUS MAX and Sony double-speed CD-ROM drive to install in the family’s PC.



    About the "CD-ROM compatibility": It seems that there were numerous competing interfaces in the early days of CD-ROM technology. The GUS MAX simply integrated 3 different CD-ROM controllers onto the audio card. This was superfluous to me since the Sony drive came with an appropriate controller card anyway, though I didn’t figure out that the extra controller card was unnecessary until after I installed it. No matter; computers of the day were rife with expansion ports.



    The 3 different CD-ROM controllers on the GUS MAX

    Explaining The Difference
    It was difficult to explain the difference in quality to those who didn’t really care. Sometime during 1995, I picked up a quasi-promotional CD-ROM called "The Gravis Ultrasound Experience" from Babbage’s computer store (remember when that was a thing?). As most PC software had been distributed on floppy discs up until this point, this CD-ROM was an embarrassment of riches. Tons of game demos, scene demos, tracker music, and all the latest GUS drivers and support software.

    Further, the CD-ROM had a number of red book CD audio tracks that illustrated the difference between Sound Blaster cards and the GUS. I remember loaning this to a tech-savvy coworker who disbelieved how awesome the GUS was. The coworker took it home, listened to it, and wholly agreed that the GUS audio sounded better than the SB audio in the comparison — and was thoroughly confused because she was hearing this audio emanating from her Sound Blaster. It was the difference between real-time and pre-rendered audio, I suppose, but I failed to convey that message. I imagine the same issue comes up even today regarding real-time video rendering vs., e.g., a pre-rendered HD cinematic posted on YouTube.

    Regrettably, I can’t find that CD-ROM anymore which leads me to believe that the coworker never gave it back. Too bad, because it was quite the treasure trove.

    Aftermath
    According to folklore I’ve heard, Gravis couldn’t keep up as the world changed to Windows and failed to deliver decent drivers. Indeed, I remember trying to keep my GUS in service under Windows 95 well into 1998 but eventually relented and installed some kind of more appropriate sound card that was better supported under Windows.

    Of course, audio output capability has been standard issue for any PC for at least 10 years and many people aren’t even aware that discrete sound cards still exist. Real-time audio rendering has become less essential as full musical tracks can be composed and compressed into PCM format and delivered with the near limitless space afforded by optical storage.

    A few years ago, it was easy to pick up old GUS cards on eBay for cheap. As of this writing, there are only a few and they’re pricy (but perhaps not selling). Maybe I was just viewing during the trough of no value a few years ago.

    Nowadays, of course, anyone interested in studying the old GUS or getting a nostalgia fix need only boot up the always-excellent DOSBox emulator which provides remarkable GUS emulation support.

  • How to Choose the Optimal Multi-Touch Attribution Model for Your Organisation

    13 March 2023, by Erin — Analytics Tips

    If you struggle to connect the dots on your customer journeys, you are looking at the right solution.

    Multi-channel attribution models allow you to better understand the users’ paths to conversion and identify key channels and marketing assets that assist them.

    That said, each attribution model has inherent limitations, which make the selection process even harder.

    This guide explains how to choose the optimal multi-touch attribution model. We cover the pros and cons of popular attribution models, main evaluation criteria and how-to instructions for model implementation. 

    Pros and Cons of Different Attribution Models 

    Types of Attribution Models

    First Interaction 

    The First Interaction attribution model (also known as first touch) assigns full credit for the conversion to the first channel that brought in the lead. However, it doesn’t report the other interactions the visitor had before converting.

    Marketers who are primarily focused on demand generation and user acquisition find the first-touch attribution model useful for evaluating and optimising the top of the funnel (ToFU).

    Pros 

    • Reflects the start of the customer journey
    • Shows channels that bring in the best-qualified leads 
    • Helps track brand awareness campaigns

    Cons 

    • Ignores the impact of later interactions at the middle and bottom of the funnel 
    • Doesn’t provide a full picture of users’ decision-making process 

    Last Interaction 

    The Last Interaction attribution model (also known as last touch) shifts the entire credit allocation to the last channel before conversion, without accounting for the contribution of any other channel.

    If your focus is conversion optimization, the last-touch model helps you determine which channels, assets or campaigns seal the deal for the prospect. 

    Pros 

    • Reports bottom-of-the-funnel events
    • Requires minimal data and configurations 
    • Helps estimate cost-per-lead or cost-per-acquisition

    Cons 

    • No visibility into assisted conversions and prior visitor interactions 
    • Overemphasises the importance of the last channel (which can often be direct traffic)

    Last Non-Direct Interaction 

    Last Non-Direct attribution excludes direct traffic from the calculation and assigns full conversion credit to the preceding channel. For example, if a visitor clicks a paid ad and later goes directly to your website to buy a product, the paid ad receives 100% of the conversion credit.

    Last Non-Direct attribution provides greater clarity into bottom-of-the-funnel (BoFU) events. Yet it still under-reports the role other channels played in the conversion.

    Pros 

    • Improved channel visibility, compared to Last-Touch 
    • Avoids over-valuing direct visits
    • Reports on lead-generation efforts

    Cons 

    • Doesn’t work for account-based marketing (ABM) 
    • Favours lead quantity over lead quality

    Linear Model

    The Linear attribution model assigns equal credit for a conversion to all tracked touchpoints, regardless of their impact on the visitor’s decision to convert.

    It helps you understand the full conversion path. But this model doesn’t distinguish between the importance of lead generation activities versus nurturing touches.

    Pros 

    • Focuses on all touch points associated with a conversion 
    • Reflects more steps in the customer journey 
    • Helps analyse longer sales cycles

    Cons 

    • Doesn’t accurately reflect the varying roles of each touchpoint 
    • Can dilute the credit if too many touchpoints are involved 

    Time Decay Model 

    The Time Decay model assumes that the closer a touchpoint is to the conversion, the greater its influence. Touchpoints immediately before the conversion get the highest credit, while the earliest ones are weighted lower (e.g., 5%-5%-10%-15%-25%-30%).

    This model better reflects real-life customer journeys. However, it devalues the impact of brand awareness and demand-generation campaigns. 

    Pros 

    • Helps track longer sales cycles and reports on each touchpoint involved 
    • Allows customising the half-life of decay to improve reporting 
    • Promotes conversion optimization at BoFu stages

    Cons 

    • Can prompt marketers to curtail ToFU spending, which would translate to fewer qualified leads at lower stages
    • Doesn’t reflect highly-influential events at earlier stages (e.g., a product demo request or free account registration, which didn’t immediately lead to conversion)

    Position-Based Model 

    The Position-Based attribution model (also known as the U-shaped model) allocates the biggest share of credit to the first and last interactions (40% each), then distributes the remaining 20% across the other touches.

    For many marketers, that’s the preferred multi-touch attribution model as it allows optimising both ToFU and BoFU channels. 

    Pros 

    • Helps establish the main channels for lead generation and conversion
    • Adds extra layers of visibility, compared to first- and last-touch attribution models 
    • Promotes budget allocation toward the most strategic touchpoints

    Cons 

    • Diminishes the importance of lead nurturing activities as more credit gets assigned to demand-gen and conversion-generation channels
    • Limited flexibility since it always assigns a fixed amount of credit to the first and last touchpoints, and the remaining credit is divided evenly among the other touchpoints
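
    To make the split between these models concrete, here is a minimal, illustrative Python sketch of how credit could be allocated across an ordered list of touchpoints. The weights follow the descriptions above (100% to a single touch, an even split, a configurable half-life for time decay, a 40/20/40 U-shape); the function names and the example journey are invented for illustration and are not Matomo’s implementation.

    # Illustrative multi-touch credit allocation (a sketch, not Matomo's implementation).
    # "touchpoints" is an ordered list of channels, from first interaction to last.

    def first_touch(touchpoints):
        return {touchpoints[0]: 1.0}

    def last_touch(touchpoints):
        return {touchpoints[-1]: 1.0}

    def last_non_direct(touchpoints, direct="direct"):
        # Credit the last channel that is not direct traffic; fall back to the last touch.
        for channel in reversed(touchpoints):
            if channel != direct:
                return {channel: 1.0}
        return {touchpoints[-1]: 1.0}

    def linear(touchpoints):
        share = 1.0 / len(touchpoints)
        credit = {}
        for channel in touchpoints:
            credit[channel] = credit.get(channel, 0.0) + share
        return credit

    def time_decay(touchpoints, days_before_conversion, half_life_days=7.0):
        # Weight each touch by 2 ** (-age / half_life), then normalise so shares sum to 1.
        weights = [2 ** (-age / half_life_days) for age in days_before_conversion]
        total = sum(weights)
        credit = {}
        for channel, weight in zip(touchpoints, weights):
            credit[channel] = credit.get(channel, 0.0) + weight / total
        return credit

    def position_based(touchpoints):
        # 40% to the first touch, 40% to the last, 20% split across the middle
        # (how 1- and 2-touch journeys are handled is my own assumption).
        if len(touchpoints) == 1:
            return {touchpoints[0]: 1.0}
        if len(touchpoints) == 2:
            return {touchpoints[0]: 0.5, touchpoints[-1]: 0.5}
        credit = {touchpoints[0]: 0.4}
        credit[touchpoints[-1]] = credit.get(touchpoints[-1], 0.0) + 0.4
        middle_share = 0.2 / (len(touchpoints) - 2)
        for channel in touchpoints[1:-1]:
            credit[channel] = credit.get(channel, 0.0) + middle_share
        return credit

    journey = ["organic search", "facebook ad", "newsletter", "direct"]
    print(position_based(journey))  # first and last touches get 0.4 each, the middle two 0.1 each
    print(time_decay(journey, days_before_conversion=[21, 10, 3, 0]))  # latest touches weigh the most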

    How to Choose the Right Multi-Touch Attribution Model For Your Business 

    If you’re deciding which attribution model is best for your business, prepare for a heated discussion. Each one has its trade-offs as it emphasises or devalues the role of different channels and marketing activities.

    To reach a consensus, the best strategy is to evaluate each model against three criteria: your marketing objectives, sales cycle length and data availability.

    Marketing Objectives 

    Businesses generate revenue in many ways: through direct sales, subscriptions, referral fees, licensing agreements, one-off or retainer services, or any combination of these activities.

    In each case, your marketing strategy will look different. For example, SaaS and direct-to-consumer (DTC) eCommerce brands have to maximise both demand generation and conversion rates. In contrast, a B2B cybersecurity consulting firm is more interested in attracting qualified leads (as opposed to any type of traffic) and progressively nurturing them towards a big-ticket purchase. 

    When selecting a multi-touch attribution model, prioritise your objectives first. Create a simple scorecard where your team ranks the various channels and campaign types you rely on to close sales.

    Alternatively, you can survey your customers to learn how they first heard about your company and what eventually triggered their conversion. Having data from both sides can help you cross-validate your assumptions and eliminate some biases. 

    Then consider which model would best reflect the role and importance of different channels in your sales cycle. Speaking of which….

    Sales Cycle Length 

    As shoppers, we spend less time deciding on a new toothpaste brand than contemplating a new IT system purchase. Factors like industry, business model (B2C, DTC, B2B, B2B2C) and deal size determine the average cycle length in your industry.

    Statistically, low-ticket B2C sales can happen within just several interactions. The average B2B decision-making process can have over 15 steps, spread over several months. 

    That’s why not all multi-touch attribution models work equally well for every business. Time decay tends to suit B2B companies better, while B2C businesses usually go for position-based or linear attribution.

    Data Availability 

    Businesses struggle with multi-touch attribution model implementation due to incomplete analytics data. 

    Our web analytics tool captures more data than Google Analytics. That’s because we rely on a privacy-focused tracking mechanism, which allows you to collect analytics without showing a cookie consent banner in markets outside of Germany and the UK. 

    Cookie consent banners are mandatory with Google Analytics. Yet almost 40% of global consumers reject them. This results in gaps in your analytics and subsequent inconsistencies in multi-touch attribution reports. With Matomo, you can compliantly collect more data for accurate reporting.

    Some companies also struggle to connect collected insights to individual shoppers. With Matomo, you can cross-attribute users across browsing sessions using our visitor tracking feature.

    When you already know a user’s identifier (e.g., full name or email address), you can track their on-site behaviours over time to better understand how they interact with your content and complete their purchases. A quick disclaimer, though: visitor tracking may not be considered compliant with certain data privacy laws. Please consult a local authority if you have doubts.

    How to Implement Multi-Touch Attribution

    Implementing multi-touch attribution modelling is like a “seek and find” game. You have to identify all the significant touchpoints in your customers’ journeys, and sometimes also brainstorm new ways to uncover the missing parts. Then figure out the best way to track users’ actions at those stages (i.e., set up conversion and event tracking).

    Here’s a step-by-step walkthrough to help you get started. 

    Select a Multi-Touch Attribution Tool 

    The global marketing attribution software market is worth $3.1 billion, meaning there are plenty of tools, differing in terms of accuracy, sophistication and price.

    To make the right call, prioritise five factors:

    • Available models : Look for a solution that offers multiple options and allows you to experiment with different modelling techniques or develop custom models. 
    • Implementation complexity : Some providers offer advanced data modelling tools for creating custom multi-touch attribution models, but offer few out-of-the-box modelling options. 
    • Accuracy : Check if the shortlisted tool collects the type of data you need. Prioritise providers who are less dependent on third-party cookies and allow you to identify repeat users. 
    • Your marketing stack : Some marketing attribution tools come with useful add-ons such as tag manager, heatmaps, form analytics, user session recordings and A/B testing tools. This means you can collect more data for multi-channel modelling with them instead of investing in extra software. 
    • Compliance : Ensure that the selected multi-attribution analytics software wouldn’t put you at risk of GDPR non-compliance when it comes to user privacy and consent to tracking/analysis. 

    Finally, evaluate the adoption costs. Free multi-channel analytics tools come with data quality and consistency trade-offs. Premium attribution tools may have “hidden” licensing costs and bill you for extra data integrations. 

    Look for a tool that offers a good price-to-value ratio (i.e., one that offers extra perks for a transparent price). 

    Set Up Proper Data Collection 

    Multi-touch attribution requires ample user data. To collect the right type of insights, you need to set up:

    • Website analytics: Ensure that you have all tracking codes installed (and working correctly!) to capture pageviews, on-site actions, referral sources and other data points about what users do on the page.
    • Tags: Add tracking parameters to monitor different referral channels (e.g., “facebook”), campaign types (e.g., “final-sale”) and creative assets (e.g., “banner-1”). Tags help you get a clearer picture of different touchpoints.
    • Integrations : To better identify on-site users and track their actions, you can also populate your attribution tool with data from your other tools – CRM system, A/B testing app, etc. 

    Finally, think about the ideal lookback window — a bounded time frame you’ll use to calculate conversions. For example, Matomo has default windows of 7, 30 or 90 days, but you can configure a custom period to better reflect your average sales cycle. For instance, if you’re selling makeup, a shorter window could yield better results. But if you’re selling CRM software for the manufacturing industry, consider extending it.
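
    As a small sketch of that idea (the helper name and the example dates are invented; only the 7, 30 and 90-day defaults come from the paragraph above), a lookback window is simply a filter on touchpoint age applied before attribution:

    from datetime import datetime, timedelta

    def within_lookback(touches, conversion_time, lookback_days=30):
        # Keep only the touchpoints that happened inside the lookback window.
        cutoff = conversion_time - timedelta(days=lookback_days)
        return [t for t in touches if t["time"] >= cutoff]

    touches = [
        {"channel": "facebook ad", "time": datetime(2023, 1, 2)},
        {"channel": "newsletter", "time": datetime(2023, 2, 20)},
    ]
    print(within_lookback(touches, conversion_time=datetime(2023, 3, 1)))
    # Only the newsletter touch survives the default 30-day window.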

    Configure Goals and Events 

    Goals indicate your main marketing objectives — more traffic, conversions and sales. In web analytics tools, you can measure these by tracking specific user behaviours. 

    For example, if your goal is lead generation, you can track:

    • Newsletter sign ups 
    • Product demo requests 
    • Gated content downloads 
    • Free trial account registration 
    • Contact form submission 
    • On-site call bookings 

    In each case, you can set up a unique tag to monitor these types of requests. Then analyse conversion rates — the percentage of users who have successfully completed the action. 

    To collect sufficient data for multi-channel attribution modelling, set up Goal Tracking for different types of touchpoints (MoFU & BoFU) and asset types (contact forms, downloadable assets, etc.).

    Your next task is to figure out how users interact with different on-site assets. That’s when Event Tracking comes in handy. 

    Event Tracking reports notify you about specific actions users take on your website. With Matomo Event Tracking, you can monitor where people click on your website, on which pages they click newsletter subscription links, or when they try to interact with static content elements (e.g., a non-clickable banner). 

    Using in-depth user behavioural reports, you can better understand which assets play a key role in the average customer journey. Using this data, you can localise “leaks” in your sales funnel and fix them to increase conversion rates.
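
    As a rough illustration of that kind of funnel analysis (the step names and counts below are placeholders, not real data), the conversion rate at each step is simply the number of users who completed it divided by the number who reached the previous step, which makes the “leaks” easy to spot:

    # Hypothetical step counts pulled from goal/event tracking reports.
    funnel = [
        ("viewed pricing page", 5000),
        ("started free trial form", 900),
        ("submitted free trial form", 600),
        ("activated trial account", 420),
    ]

    previous = None
    for step, users in funnel:
        if previous is None:
            print(f"{step}: {users} users")
        else:
            print(f"{step}: {users} users ({users / previous:.0%} of the previous step)")
        previous = users

    # The biggest drop (here, pricing page -> trial form at 18%) is the leak to fix first.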

    Test and Validate the Selected Model

    A common challenge of multi-channel attribution modelling is determining the correct correlation and causality between exposure to touchpoints and purchases. 

    For example, a user who bought a discounted product from a Facebook ad would act differently than someone who purchased a full-priced product via a newsletter link. Their rate of pre- and post-sales exposure will also differ a lot — and your attribution model may not always accurately capture that. 

    That’s why you have to continuously test and tweak the selected model type. The best approach for that is lift analysis. 

    Lift analysis means comparing how your key metrics (e.g., revenue or conversion rates) change among users who were exposed to a certain campaign versus a control group. 

    In the case of multi-touch attribution modelling, you have to monitor how your metrics change after you’ve acted on the model recommendations (e.g., invested more in a well-performing referral channel or tried a new brand awareness Twitter ad). Compare the before and after ROI. If you see a positive dynamic, your model works great. 

    The downside of this approach is that you have to invest a lot upfront. But if your goal is to create a trustworthy attribution model, the best way to validate is to act on its suggestions and then test them against past results. 
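
    In code, the core lift calculation is just a comparison of conversion rates between the exposed group and a control group. A minimal sketch (the conversion counts are placeholders):

    def lift(exposed_conversions, exposed_users, control_conversions, control_users):
        # Relative uplift of the exposed group's conversion rate over the control group's.
        exposed_rate = exposed_conversions / exposed_users
        control_rate = control_conversions / control_users
        return (exposed_rate - control_rate) / control_rate

    # e.g. 240 conversions from 8,000 users who saw the campaign,
    # versus 150 conversions from 7,500 users who did not.
    print(f"lift: {lift(240, 8000, 150, 7500):.1%}")  # 50.0%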

    Conclusion

    A multi-touch attribution model helps you measure the impact of different channels, campaign types, and marketing assets on metrics that matter — conversion rate, sales volumes and ROI. 

    Using this data, you can invest budgets into the best-performing channels and confidently experiment with new campaign types. 

    As a Matomo user, you also get to do so without breaching customers’ privacy or compromising on analytics accuracy.

    Start using accurate multi-channel attribution in Matomo. Get your free 21-day trial now. No credit card required.

  • How can I combine HDR video with a still PNG in FFmpeg without messing up colors?

    2 June 2022, by Robert

    I'm trying to add a still image to the end of a video as a title-card. Normally this is simple, but the video is HDR (shot on an iPhone) and for reasons I don't understand that's causing the colors in the PNG to go nuts.

    


    Here's the original PNG:
[Image: original colors]

    


    Here's a screenshot from the end of the output video:
[Image: changed colors]

    


    Clearly something is amiss. I made the PNG image in Photoshop from a photo, and I've tried it with and without embedding the color profile without any noticeable difference.

    


    Here's the command for ffmpeg:

    


    ffmpeg -y\
 -t 5 -i '../inputs/video.MOV'\
 -loop 1 -t 5 -i '../../common/end-youtube.png'\
 -filter_complex '
[1:v] fade=type=in:duration=1:alpha=1, setpts=PTS-STARTPTS+5/TB[end];
[0:v][end] overlay [outV]
'\
 -map [outV] '../test.mp4'


    


    I've tried this with non-HDR videos and the colors look perfectly normal, so clearly the HDRness is somehow involved. I've also tried adding this line to set the colorspace etc of the output. It helps with the PNG (although it's not perfect) but then the video is washed out.

    


     -colorspace bt709 -color_trc bt709 -color_primaries bt709 -color_range mpeg \


    


    I don't really care if the output video is HDR or not, as long as the video and png parts both look visually close to the inputs. This is something I'll need to do to many videos, so if possible a solution that doesn't involve tweaking colors for each one would be ideal.
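
    Not an authoritative answer, but one direction that is often suggested for this kind of HDR/SDR mismatch is to tonemap the HDR footage down to BT.709 before the overlay, so that both inputs share the same color space. A rough, untested sketch based on the command above (it assumes your ffmpeg build includes the zscale and tonemap filters, and it makes the output SDR rather than HDR):

     ffmpeg -y\
 -t 5 -i '../inputs/video.MOV'\
 -loop 1 -t 5 -i '../../common/end-youtube.png'\
 -filter_complex '
[0:v] zscale=t=linear:npl=100, format=gbrpf32le, zscale=p=bt709, tonemap=tonemap=hable:desat=0, zscale=t=bt709:m=bt709:r=tv, format=yuv420p [sdr];
[1:v] fade=type=in:duration=1:alpha=1, setpts=PTS-STARTPTS+5/TB [end];
[sdr][end] overlay [outV]
'\
 -map [outV] '../test.mp4'

    Because the overlay then happens entirely in BT.709, the PNG should keep its original colors; the trade-off is losing the HDR look of the iPhone footage.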