
Media (91)

Other articles (68)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    Users can access profile editing from their author page; a link in the navigation, "Modifier votre profil" (Edit your profile), is (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administrer" (Administer) section of the site.
    From there, in the navigation menu, you can access a "Gestion des langues" (Language management) section, which lets you enable support for new languages.
    Each newly added language can be disabled as long as no object has been created in that language. Once one has, the language becomes greyed out in the configuration and (...)

  • XMP PHP

    13 May 2011

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform, or XMP, an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use in the Semantic Web.
    XMP can store information about a file as an XML document: title, author, history (...)

On other sites (5558)

  • 7 Ecommerce Metrics to Track and Improve in 2024

    12 April 2024, by Erin

    You can invest hours into market research, create the best ads you’ve ever seen and fine-tune your budgets. But the only way to really know if your digital marketing campaigns move the needle is to track ecommerce metrics.

    It’s time to put your hopes and gut feelings aside and focus on the data. Ecommerce metrics are key performance indicators that can tell you a lot about the performance of a single campaign, a traffic source or your entire marketing efforts. 

    That’s why it’s essential to understand what ecommerce metrics are, which key metrics to track, and how to improve them.

    Ready to do all of the above? Then let’s get started.

    What are ecommerce metrics?

    An ecommerce metric is any metric that helps you understand the effectiveness of your digital marketing efforts and the extent to which users are taking a desired action. Most ecommerce metrics focus on conversions, which could be anything from making a purchase to subscribing to your email list.

    You need to track ecommerce metrics to understand how well your marketing efforts are working. They are essential to helping you run a cost-effective marketing campaign that delivers a return on investment. 

    For example, tracking ecommerce metrics will help you identify whether your digital marketing campaigns are generating a return on investment or whether they are actually losing money. They also help you identify your most effective campaigns and traffic sources. 

    Ecommerce metrics also help you spot opportunities for improvement both in terms of your marketing campaigns and your site’s UX. 

    For instance, you can use ecommerce metrics to track the impact on revenue of A/B tests on your marketing campaigns. Or you can use them to understand how users interact with your website and what, if anything, you can do to make it more engaging.

    What’s the difference between conversion rate and conversion value?

    The difference between a conversion rate and a conversion value is that the former is a percentage while the latter is a monetary value. 

    There can be confusion between the terms conversion rate and conversion value. Since conversions are core metrics in ecommerce, it’s worth taking a minute to clarify. 

    Conversion rates measure the percentage of people who take a desired action on your website compared to the total number of visitors. If you have 100 visitors and one of them converts, then your conversion rate is 1%. 

    Here’s the formula for calculating your conversion rate:

    Conversion Rate (%) = (Number of conversions / Total number of visitors) × 100


    Using the example above:

    Conversion Rate = (1 / 100) × 100 = 1%
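    For illustration, the same arithmetic as a tiny Python helper (a generic sketch, not tied to any particular analytics API):

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Percentage of visitors who completed the desired action."""
    return conversions / visitors * 100

# 1 conversion from 100 visitors -> 1%
print(conversion_rate(1, 100))
```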

    Conversion value is a monetary amount you assign to each conversion. In some cases, this is the price of the product a user purchases. In other conversion events, such as signing up for a free trial, you may wish to assign a hypothetical conversion value. 

    To calculate a hypothetical conversion value, let’s consider that you have estimated the average revenue generated from a paying customer is $300. If the conversion rate from free trial to paying customer is 20%, then the hypothetical conversion value for each free trial signup would be $300 multiplied by 20%, which equals $60. This takes into account the number of free trial users who eventually become paying customers.

    So the formula for hypothetical conversion value looks like this:


    Hypothetical conversion value = (Average revenue per paying customer) × (Conversion rate)

    Using the values from our example:

    Hypothetical conversion value = $300 × 20% = $60
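    The same weighting can be sketched in Python (the helper name and the 20% rate are just the example’s assumptions):

```python
def hypothetical_conversion_value(avg_revenue_per_customer: float,
                                  trial_to_paid_rate: float) -> float:
    """Expected revenue from one signup, weighted by the trial-to-paid
    conversion rate (rate given as a fraction, e.g. 0.20 for 20%)."""
    return avg_revenue_per_customer * trial_to_paid_rate

# $300 average revenue x 20% trial-to-paid rate -> $60 per signup
print(hypothetical_conversion_value(300, 0.20))
```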

    The most important ecommerce metrics and how to track them

    There are dozens of ecommerce metrics you could track, but here are seven of the most important. 

    Conversion rate

    Conversion rate is the percentage of visitors who take a desired action. It is arguably one of the most important ecommerce metrics and a great top-level indicator of the success of your marketing efforts. 

    You can measure the conversion rate of anything, including newsletter signups, ebook downloads, and product purchases, using the following formula:


    Conversion rate = (Number of people who took action / Total number of visitors) × 100

    You usually won’t have to manually calculate your conversion rate, though. Almost every web analytics or ad platform will track the conversion rate automatically.

    Matomo, for instance, automatically tracks any conversion you set in the Goals report.

    A screenshot of Matomo's Goals report

    As you can see in the screenshot, your site’s conversions are plotted over a period of time and the conversion rate is tracked below the graph. You can change the time period to see how your conversion rate fluctuates.

    If you want to go even further, track your new visitor conversion rate to see how engaging your site is to first-time visitors. 

    Try Matomo for Free

    Get the web insights you need, without compromising data accuracy.

    No credit card required

    Cost per acquisition

    Cost per acquisition (CPA) is the average cost of acquiring a new user. You can calculate your overall CPA or you can break CPA down by email campaign, traffic source, or any other criteria. 

    Calculate CPA by dividing your total marketing cost by the number of new users you acquire.

    CPA = Total marketing cost / Number of new users acquired

    So if your Google Ads campaign costs €1,000 and you acquire 100 new users, your CPA is €10 (1000/100=10).
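    As a quick Python sketch of the formula above (hypothetical helper name):

```python
def cost_per_acquisition(total_marketing_cost: float, new_users: int) -> float:
    """Average cost of acquiring one new user."""
    return total_marketing_cost / new_users

# €1,000 of ad spend bringing in 100 new users -> €10 per acquisition
print(cost_per_acquisition(1000, 100))
```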

    It’s important to note that CPA is not the same as customer acquisition cost. Customer acquisition cost considers the number of paying customers. CPA looks at the number of users taking a certain action, like subscribing to a newsletter, making a purchase, or signing up for a free trial.

    Cost per acquisition is a direct measure of your marketing efforts’ effectiveness, especially when comparing CPA to average customer spend and return on ad spend. 

    If your CPA is lower than the average customer spend, your marketing campaign is profitable. If not, you can look at ways to either increase customer spend or decrease your cost per acquisition.

    Customer lifetime value

    Customer lifetime value (CLV) is the average amount of money a customer will spend with your ecommerce brand over their lifetime. 

    Customer value is the total worth of a customer to your brand based on their purchasing behaviour. To calculate it, multiply the average purchase value by the average number of purchases. For instance, if the average purchase value is €50 and customers make 5 purchases on average, the customer value would be €250.

    Use this formula to calculate customer value:

    Customer value = Average purchase value × Average number of purchases


    Then you can calculate customer lifetime value using the following formula:

    CLV = Customer value × Average customer lifespan

    As another example, let’s say you have a software company and customers pay you €500 per year for an annual subscription. If the average customer lifespan is 5 years, then the customer lifetime value (CLV) would be €2,500.

    Customer lifetime value = €500 × 5 = €2,500
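    The two steps can be sketched together in Python (hypothetical helper, using the examples above):

```python
def customer_lifetime_value(avg_purchase_value: float,
                            avg_purchases: float,
                            avg_lifespan: float) -> float:
    """Customer value (purchase value x purchase count) times lifespan."""
    customer_value = avg_purchase_value * avg_purchases
    return customer_value * avg_lifespan

# €500/year subscription (one purchase per year) over 5 years -> €2,500 CLV
print(customer_lifetime_value(500, 1, 5))
```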

    Knowing how much potential customers are likely to spend helps you set accurate marketing budgets and optimise the price of your products. 

    Return on investment

    Return on investment (ROI) is the amount of revenue your marketing efforts generate compared to total spend. 

    It’s usually calculated as a percentage using the following formula:

    ROI = ((Revenue - Total spend) / Total spend) × 100

    If you spend €1,000 on a paid ad campaign and your efforts bring in €5,000, then your ROI is 400% ((5,000 - 1,000) / 1,000 × 100).
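    Note that the channel figures below (roughly 2576% and 1323%) net the spend out of the revenue before dividing, which is the standard ROI definition; a small Python sketch of that calculation:

```python
def roi_percent(revenue: float, total_spend: float) -> float:
    """Net return on investment: ((revenue - spend) / spend) * 100."""
    return (revenue - total_spend) / total_spend * 100

# $26,763.48 revenue on $1,000 of SEO/content spend -> 2576 (percent, rounded)
print(round(roi_percent(26763.48, 1000)))
```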

    With a web analytics tool like Matomo, you can quickly see the revenue generated from each traffic source, and you can drill down further to compare different social media channels, search engines, referral websites and campaigns to get a more granular view.

    Revenue by channel in Matomo

    In the example above, from Matomo’s Marketing Attribution feature, we can see that social networks generated the most revenue for the year. To calculate ROI, we would need to compare the investment in each channel.

    If we invested $1,000 per year in search engine optimisation and content marketing, the return on investment (ROI) stands at approximately 2576%, based on revenue of $26,763.48 per year.

    Conversely, for organic social media campaigns, where $5,000 was invested and revenue amounted to $71,180.22 per year, the ROI is approximately 1323%. 

    Despite differences in revenue generation, both channels exhibit significant returns on investment, with SEO and content marketing demonstrating a much higher ROI compared to organic social media campaigns. 

    With that in mind, we might want to consider shifting our marketing budget to focus more on search engine optimisation and content marketing, as they deliver a greater return on investment.


    Return on ad spend

    Return on ad spend (ROAS) is similar to return on investment, but it measures the profitability of a specific ad or campaign.

    Calculate ROAS using the following formula:

    ROAS = Revenue / Ad cost

    A ROAS above 1 means you are making money. If you generate €3 for every €1 you spend on advertising, for example, there’s no reason to turn off that campaign. If you only make €1 for every €2 you spend, however, then you need to shut down the campaign or optimise it.
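    A minimal Python sketch of that break-even check (hypothetical helper):

```python
def roas(revenue: float, ad_cost: float) -> float:
    """Return on ad spend: revenue generated per unit of ad spend."""
    return revenue / ad_cost

# €3 back per €1 spent -> 3.0 (profitable); €1 back per €2 spent -> 0.5 (losing)
print(roas(3, 1), roas(1, 2))
```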

    Bounce rate

    Bounce rate is the percentage of visitors who leave your site without taking another action. Calculate it using the following formula:

    Bounce rate = (Number of visitors who bounce / Total number of visitors) × 100

    Some portion of users will always leave your site immediately, but you should aim to make your bounce rate as low as possible. After all, every customer that bounces is a missed opportunity that you may never get again. 
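    The bounce-rate formula as a small Python helper (a generic sketch, with made-up example numbers):

```python
def bounce_rate(bounced_visits: int, total_visits: int) -> float:
    """Percentage of visits that leave without a second interaction."""
    return bounced_visits / total_visits * 100

# 400 of 1,000 visits bounced -> 40%
print(bounce_rate(400, 1000))
```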

    You can check the bounce rate for each of your site’s pages using Matomo’s page analytics report. Web analytics tools like Google Analytics can also track bounce rates for online stores.

    A screenshot of Matomo's page view report

    Bounce rate is calculated automatically. You can sort the list of pages by bounce rate allowing you to prioritise your optimisation efforts. 

    Don’t stop there, though. Explore bounce rate further by comparing your mobile bounce rate vs. desktop bounce rate by segmenting your traffic. This will highlight whether your mobile site needs improving. 


    Click-through rate

    Your click-through rate (CTR) tells you the number of people who click on your ads as a percentage of total impressions. You can calculate it by dividing the number of clicks your ad gets by the total number of times people see it.

    So the formula looks like this:

    CTR (%) = (Number of clicks / Total impressions) × 100

    If an ad gets 1,000 impressions and 10 people click on it, then the CTR is 10 / 1,000 × 100 = 1%.
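    And the same calculation in Python (hypothetical helper name):

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """Clicks as a percentage of ad impressions."""
    return clicks / impressions * 100

# 10 clicks on 1,000 impressions -> 1%
print(click_through_rate(10, 1000))
```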

    You don’t usually need to calculate your click-through rate manually, however. Most ad platforms like Google Ads will automatically calculate CTR.

    What is considered a good ecommerce sales conversion rate?

    This question is so broad it’s almost impossible to answer. The thing is, sales conversion rates vary massively depending on the conversion event and the industry. A good conversion rate in one industry might be terrible in another. 

    That being said, research shows that the average website conversion rate across all industries is 2.35%. Of course, some websites convert much better than this. The same study found that the top 25% of websites across all industries have a conversion rate of 5.31% or higher. 

    How can you improve your conversion rate?

    Ecommerce metrics don’t just let you track your campaign’s ROI; they also help you identify ways to improve your campaigns.

    Use these five tips to start improving your marketing campaigns’ conversion rates today:

    Run A/B tests

    The most effective way to improve almost all of the ecommerce metrics you track is to test, test, and test again.

    A/B testing (or multivariate testing) compares two different versions of the same content, such as a landing page or blog post. Seeing which version performs better can help you squeeze as many conversions as possible from your website and ad campaigns, but only if you test as many elements as possible. These should include:

    • Ad placement
    • Ad copy
    • CTAs
    • Headlines
    • Straplines
    • Colours
    • Design

    To create and analyse tests and their results effectively, you’ll need either an A/B testing platform or a web analytics solution like Matomo, which offers one out of the box.

    A/B testing in Matomo analytics

    Matomo’s A/B Testing feature makes it easy to create and track tests over time, breaking down each test’s variations by the metrics that matter. It automatically calculates statistical significance, too, meaning you can be sure you’re making a change for the better. 
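    For intuition about what “statistical significance” means here, below is a rough sketch of a standard two-proportion z-test (a generic statistical method, not Matomo’s exact implementation):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_*: conversions per variant, n_*: visitors per variant.
    Returns (z_statistic, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant A: 100 conversions from 10,000 visits; variant B: 150 from 10,000
z, p = two_proportion_z_test(100, 10_000, 150, 10_000)
print(round(z, 2), p < 0.05)
```

A p-value below your chosen threshold (commonly 0.05) suggests the difference between variants is unlikely to be random noise.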


  • WebVTT as a W3C Recommendation

    2 December 2013, by silvia

    Three weeks ago I attended TPAC, the annual meeting of W3C Working Groups. One of the meetings was of the Timed Text Working Group (TT-WG), which has been specifying TTML, the Timed Text Markup Language. It is now proposed that WebVTT also be standardised through the same Working Group.

    How did that happen, you may ask, in particular since WebVTT and TTML have in the past been portrayed as rival caption formats? How will the WebVTT spec that is currently under development in the Text Track Community Group (TT-CG) move through a Working Group process?

    I’ll explain first why there is a need for WebVTT to become a W3C Recommendation, and then how this is proposed to be part of the Timed Text Working Group deliverables, and finally how I can see this working between the TT-CG and the TT-WG.

    Advantages of a W3C Recommendation

    TTML is an XML-based markup format for captions developed during the time that XML was all the hotness. It has become a W3C standard (a so-called “Recommendation”) despite not having been implemented in any browsers (if you ask me: that’s actually a flaw of the W3C standardisation process: it requires only two interoperable implementations of any kind – and that could be anyone’s JavaScript library or Flash demonstrator – it doesn’t actually require browser implementations. But I digress…). To be fair, a subpart of TTML is by now implemented in Internet Explorer, but all the other major browsers have thus far rejected proposals of implementation.

    Because of its Recommendation status, TTML has become the basis for several other caption standards that other SDOs have picked: the SMPTE’s SMPTE-TT format, the EBU’s EBU-TT format, and the DASH Industry Forum’s use of SMPTE-TT. SMPTE-TT has also become the “safe harbour” format for the US legislation on captioning as decided by the FCC. (Note that the FCC requirements for captions on the Web are actually based on a list of features rather than requiring a specific format. But that will be the topic of a different blog post…)

    WebVTT is much younger than TTML. TTML was developed as an interchange format among caption authoring systems. WebVTT was built for rendering in Web browsers and with HTML5 in mind. It meets the requirements of the <track> element and supports more than just captions/subtitles. WebVTT is popular with browser developers and has already been implemented in all major browsers (Firefox Nightly is the last to implement it – all others have support already released).

    As we can see and as has been proven by the HTML spec and multiple other specs: browsers don’t wait for specifications to have W3C Recommendation status before they implement them. Nor do they really care about the status of a spec – what they care about is whether a spec makes sense for the Web developer and user communities and whether it fits in the Web platform. WebVTT has obviously achieved this status, even with an evolving spec. (Note that the spec tries very hard not to break backwards compatibility, thus all past implementations will at least be compatible with the more basic features of the spec.)

    Given that Web browsers don’t need WebVTT to become a W3C standard, why then should we spend effort in moving the spec through the W3C process to become a W3C Recommendation?

    The modern Web is now much bigger than just Web browsers. Web specifications are being used in all kinds of devices including TV set-top boxes, phone and tablet apps, and even unexpected devices such as white goods. Videos are increasingly omnipresent, thus exposing deaf and hard-of-hearing users to ever-growing challenges in interacting with content on diverse devices. Some of these devices will not use auto-updating software but fixed versions, so they can’t easily adapt to new features. Thus, caption producers (both commercial and community) need to be able to author captions (and other video accessibility content as defined by the HTML5 <track> element) towards a feature set that is clearly defined to be supported by such non-updating devices.

    Understandably, device vendors in this space have a need to build their technology on standardised specifications. SDOs for such device technologies like to reference fixed specifications so the feature set is not continually updating. To reference WebVTT, they could use a snapshot of the specification at any time and reference that, but that’s not how SDOs work. They prefer referencing an officially sanctioned and tested version of a specification – for a W3C specification that means creating a W3C Recommendation of the WebVTT spec.

    Taking WebVTT on a W3C recommendation track is actually advantageous for browsers, too, because a test suite will have to be developed that proves that features are implemented in an interoperable manner. In summary, I can see the advantages and personally support the effort to take WebVTT through to a W3C Recommendation.

    Choice of Working Group

    AFAIK this is the first time that a specification developed in a Community Group is being moved onto the recommendation track. This is something that was expected when the W3C created CGs, but not something that has an established process yet.

    The first question of course is which WG would take it through to Recommendation? Would we create a new Working Group or find an existing one to move the specification through? Since WGs involve a lot of overhead, the preference was to add WebVTT to the charter of an existing WG. The two obvious candidates were the HTML WG and the TT-WG – the first because it’s where WebVTT originated and the latter because it’s the closest thematically.

    Adding a deliverable to a WG is a major undertaking. The TT-WG is currently in the process of re-chartering and thus a suggestion was made to add WebVTT to the milestones of this WG. TBH that was not my first choice. Since I’m already an editor in the HTML WG and WebVTT is very closely related to HTML and can be tested extensively as part of HTML, I preferred the HTML WG. However, adding WebVTT to the TT-WG has some advantages, too.

    Since TTML is an exchange format, lots of captions that will be created (at least professionally) will be in TTML and TTML-related formats. It makes sense to create a mapping from TTML to WebVTT for rendering in browsers. The expertise of both TTML and WebVTT experts is required to develop a good mapping – as was shown when we developed the mapping from CEA608/708 to WebVTT. Also, captioning experts are already in the TT-WG, so it helps to get a second set of eyes onto WebVTT.

    A disadvantage of moving a specification out of a CG into a WG is, however, that you potentially lose a lot of the expertise that is already involved in the development of the spec. People don’t easily re-subscribe to additional mailing lists or want the additional complexity of involving another community (see e.g. this email).

    So, a good process needs to be developed to allow everyone to contribute to the spec in the best way possible without requiring duplicate work. How can we do that?

    The forthcoming process

    At TPAC the TT-WG discussed for several hours what the next steps are in taking WebVTT through the TT-WG to recommendation status (agenda with slides). I won’t bore you with the different views – if you are keen, you can read the minutes.

    What I came away with is the following process:

    1. Fix a few more bugs in the CG until we’re happy with the feature set in the CG. This should match the feature set that we realistically expect devices to implement for a first version of the WebVTT spec.
    2. Make a FSA (Final Specification Agreement) in the CG to create a stable reference and a clean IPR position.
    3. Assuming that the TT-WG’s charter has been approved with WebVTT as a milestone, we would next bring the FSA specification into the TT-WG as FPWD (First Public Working Draft) and immediately do a Last Call which effectively freezes the feature set (this is possible because there has already been wide community review of the WebVTT spec); in parallel, the CG can continue to develop the next version of the WebVTT spec with new features (just like it is happening with the HTML5 and HTML5.1 specifications).
    4. Develop a test suite and address any issues in the Last Call document (of course, also fix these issues in the CG version of the spec).
    5. As per W3C process, substantive and minor changes to Last Call documents have to be reported and raised issues addressed before the spec can progress to the next level: Candidate Recommendation status.
    6. For the next step – Proposed Recommendation status – an implementation report is necessary, and thus the test suite needs to be finalized for the given feature set. The feature set may also be reduced at this stage to just the ones implemented interoperably, leaving any other features for the next version of the spec.
    7. The final step is Recommendation status, which simply requires sufficient support and endorsement by W3C members.

    The first version of the WebVTT spec naturally has a focus on captioning (and subtitling), since this has been the dominant use case we have focused on thus far, and it’s the part of WebVTT that is most compatibly implemented across browsers. It’s my expectation that the next version of WebVTT will have a lot more features related to audio descriptions, chapters and metadata. Thus, this seems a good time for a first version feature freeze.

    There are still several obstacles towards progressing WebVTT as a milestone of the TT-WG. Apart from the need to get buy-in from the TT-WG, the TT-CG, and the AC (Advisory Committee, who have to approve the new charter), we’re also looking at the license of the specification document.

    The CG specification has an open license that allows creating derivative work as long as there is attribution, while the W3C document license for documents on the recommendation track does not allow the creation of derivative work unless given explicit exceptions. This is an issue that is currently being discussed in the W3C with a proposal for a CC-BY license on the Recommendation track. However, my view is that it’s probably ok to use the different document licenses: the TT-WG will work on WebVTT 1.0 and give it a W3C document license, while the CG starts working on the next WebVTT version under the open CG license. It probably actually makes sense to have a less open license on a frozen spec.

    Making the best of a complicated world

    WebVTT is now proposed as part of the recharter of the TT-WG. I have no idea how complicated the process will become to achieve a W3C WebVTT 1.0 Recommendation, but I am hoping that what is outlined above will be workable in such a way that all of us get to focus on progressing the technology.

    At TPAC I got the impression that the TT-WG is committed to progressing WebVTT to Recommendation status. I know that the TT-CG is committed to continue developing WebVTT to its full potential for all kinds of media-time aligned content with new kinds already discussed at FOMS. Let’s enable both groups to achieve their goals. As a consequence, we will allow the two formats to excel where they do : TTML as an interchange format and WebVTT as a browser rendering format.

  • WebVTT as a W3C Recommendation

    1er janvier 2014, par silvia

    Three weeks ago I attended TPAC, the annual meeting of W3C Working Groups. One of the meetings was of the Timed Text Working Group (TT-WG), that has been specifying TTML, the Timed Text Markup Language. It is now proposed that WebVTT be also standardised through the same Working Group.

    How did that happen, you may ask, in particular since WebVTT and TTML have in the past been portrayed as rival caption formats ? How will the WebVTT spec that is currently under development in the Text Track Community Group (TT-CG) move through a Working Group process ?

    I’ll explain first why there is a need for WebVTT to become a W3C Recommendation, and then how this is proposed to be part of the Timed Text Working Group deliverables, and finally how I can see this working between the TT-CG and the TT-WG.

    Advantages of a W3C Recommendation

    TTML is a XML-based markup format for captions developed during the time that XML was all the hotness. It has become a W3C standard (a so-called “Recommendation”) despite not having been implemented in any browsers (if you ask me : that’s actually a flaw of the W3C standardisation process : it requires only two interoperable implementations of any kind – and that could be anyone’s JavaScript library or Flash demonstrator – it doesn’t actually require browser implementations. But I digress…). To be fair, a subpart of TTML is by now implemented in Internet Explorer, but all the other major browsers have thus far rejected proposals of implementation.

    Because of its Recommendation status, TTML has become the basis for several other caption standards that other SDOs have picked : the SMPTE’s SMPTE-TT format, the EBU’s EBU-TT format, and the DASH Industry Forum’s use of SMPTE-TT. SMPTE-TT has also become the “safe harbour” format for the US legislation on captioning as decided by the FCC. (Note that the FCC requirements for captions on the Web are actually based on a list of features rather than requiring a specific format. But that will be the topic of a different blog post…)

    WebVTT is much younger than TTML. TTML was developed as an interchange format among caption authoring systems. WebVTT was built for rendering in Web browsers and with HTML5 in mind. It meets the requirements of the <track> element and supports more than just captions/subtitles. WebVTT is popular with browser developers and has already been implemented in all major browsers (Firefox Nightly is the last to implement it – all others have support already released).

    As we can see and as has been proven by the HTML spec and multiple other specs : browsers don’t wait for specifications to have W3C Recommendation status before they implement them. Nor do they really care about the status of a spec – what they care about is whether a spec makes sense for the Web developer and user communities and whether it fits in the Web platform. WebVTT has obviously achieved this status, even with an evolving spec. (Note that the spec tries very hard not to break backwards compatibility, thus all past implementations will at least be compatible with the more basic features of the spec.)

    Given that Web browsers don’t need WebVTT to become a W3C standard, why then should we spend effort in moving the spec through the W3C process to become a W3C Recommendation ?

    The modern Web is now much bigger than just Web browsers. Web specifications are being used in all kinds of devices including TV set-top boxes, phone and tablet apps, and even unexpected devices such as white goods. Videos are increasingly omnipresent thus exposing deaf and hard-of-hearing users to ever-growing challenges in interacting with content on diverse devices. Some of these devices will not use auto-updating software but fixed versions so can’t easily adapt to new features. Thus, caption producers (both commercial and community) need to be able to author captions (and other video accessibility content as defined by the HTML5 element) towards a feature set that is clearly defined to be supported by such non-updating devices.

    Understandably, device vendors in this space have a need to build their technology on standardised specifications. SDOs for such device technologies like to reference fixed specifications so the feature set is not continually updating. To reference WebVTT, they could use a snapshot of the specification at any time and reference that, but that’s not how SDOs work. They prefer referencing an officially sanctioned and tested version of a specification – for a W3C specification that means creating a W3C Recommendation of the WebVTT spec.

Taking WebVTT onto the W3C Recommendation track is actually advantageous for browsers, too, because a test suite will have to be developed that proves the features are implemented in an interoperable manner. In summary, I can see the advantages and personally support the effort to take WebVTT through to a W3C Recommendation.

    Choice of Working Group

AFAIK this is the first time that a specification developed in a Community Group is being moved onto the Recommendation track. This is something that has been expected ever since the W3C created CGs, but there is no established process for it yet.

The first question, of course, is which WG would take it through to Recommendation. Would we create a new Working Group or find an existing one to move the specification through? Since WGs involve a lot of overhead, the preference was to add WebVTT to the charter of an existing WG. The two obvious candidates were the HTML WG and the TT-WG – the former because it’s where WebVTT originated and the latter because it’s the closest thematically.

    Adding a deliverable to a WG is a major undertaking. The TT-WG is currently in the process of re-chartering and thus a suggestion was made to add WebVTT to the milestones of this WG. TBH that was not my first choice. Since I’m already an editor in the HTML WG and WebVTT is very closely related to HTML and can be tested extensively as part of HTML, I preferred the HTML WG. However, adding WebVTT to the TT-WG has some advantages, too.

Since TTML is an exchange format, a lot of the captions that will be created (at least professionally) will be in TTML and TTML-related formats. It makes sense to create a mapping from TTML to WebVTT for rendering in browsers. The expertise of both TTML and WebVTT experts is required to develop a good mapping – as was shown when we developed the mapping from CEA608/708 to WebVTT. Also, captioning experts are already in the TT-WG, so it helps to get a second set of eyes on WebVTT.
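To illustrate what such a mapping involves (this is my own simplified sketch, not the official mapping document): a basic TTML paragraph with begin/end attributes corresponds fairly directly to a WebVTT cue, while styling and regions need more careful translation. A TTML fragment like:

```
<tt xmlns="http://www.w3.org/ns/ttml">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:04.000">Hello world</p>
    </div>
  </body>
</tt>
```

would map to the WebVTT cue:

```
WEBVTT

00:00:01.000 --> 00:00:04.000
Hello world
```

The simple cases are mechanical; the hard work is in mapping TTML’s region/style model onto WebVTT cue settings and CSS, which is exactly where combined TTML and WebVTT expertise pays off.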

    A disadvantage of moving a specification out of a CG into a WG is, however, that you potentially lose a lot of the expertise that is already involved in the development of the spec. People don’t easily re-subscribe to additional mailing lists or want the additional complexity of involving another community (see e.g. this email).

    So, a good process needs to be developed to allow everyone to contribute to the spec in the best way possible without requiring duplicate work. How can we do that ?

    The forthcoming process

    At TPAC the TT-WG discussed for several hours what the next steps are in taking WebVTT through the TT-WG to recommendation status (agenda with slides). I won’t bore you with the different views – if you are keen, you can read the minutes.

    What I came away with is the following process :

    1. Fix a few more bugs in the CG until we’re happy with the feature set in the CG. This should match the feature set that we realistically expect devices to implement for a first version of the WebVTT spec.
    2. Make a FSA (Final Specification Agreement) in the CG to create a stable reference and a clean IPR position.
    3. Assuming that the TT-WG’s charter has been approved with WebVTT as a milestone, we would next bring the FSA specification into the TT-WG as FPWD (First Public Working Draft) and immediately do a Last Call which effectively freezes the feature set (this is possible because there has already been wide community review of the WebVTT spec) ; in parallel, the CG can continue to develop the next version of the WebVTT spec with new features (just like it is happening with the HTML5 and HTML5.1 specifications).
    4. Develop a test suite and address any issues in the Last Call document (of course, also fix these issues in the CG version of the spec).
    5. As per W3C process, substantive and minor changes to Last Call documents have to be reported and raised issues addressed before the spec can progress to the next level : Candidate Recommendation status.
    6. For the next step – Proposed Recommendation status – an implementation report is necessary, and thus the test suite needs to be finalized for the given feature set. The feature set may also be reduced at this stage to just the ones implemented interoperably, leaving any other features for the next version of the spec.
    7. The final step is Recommendation status, which simply requires sufficient support and endorsement by W3C members.

The first version of the WebVTT spec naturally focuses on captioning (and subtitling), since this has been the dominant use case we have concentrated on thus far and it’s the most interoperably implemented part of WebVTT in browsers. I expect the next version of WebVTT to have many more features related to audio descriptions, chapters and metadata. This therefore seems a good time for a first-version feature freeze.
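Chapters and metadata, for instance, already fit the existing cue mechanism – a chapter track (used with `<track kind="chapters">`) is just cues whose text is the chapter title, and a metadata track (kind="metadata") can carry arbitrary payloads such as JSON. The cues below are my own illustrative examples, not from the spec:

```
WEBVTT

chapter-1
00:00:00.000 --> 00:04:30.000
Introduction

chapter-2
00:04:30.000 --> 00:12:00.000
The Main Argument
```

What the next spec version needs to pin down is the semantics and rendering expectations for these kinds, which is why deferring them past the first feature freeze makes sense.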

There are still several obstacles to progressing WebVTT as a milestone of the TT-WG. Apart from the need to get buy-in from the TT-WG, the TT-CG, and the AC (Advisory Committee, who have to approve the new charter), we’re also looking at the license of the specification document.

    The CG specification has an open license that allows creating derivative work as long as there is attribution, while the W3C document license for documents on the recommendation track does not allow the creation of derivative work unless given explicit exceptions. This is an issue that is currently being discussed in the W3C with a proposal for a CC-BY license on the Recommendation track. However, my view is that it’s probably ok to use the different document licenses : the TT-WG will work on WebVTT 1.0 and give it a W3C document license, while the CG starts working on the next WebVTT version under the open CG license. It probably actually makes sense to have a less open license on a frozen spec.

    Making the best of a complicated world

    WebVTT is now proposed as part of the recharter of the TT-WG. I have no idea how complicated the process will become to achieve a W3C WebVTT 1.0 Recommendation, but I am hoping that what is outlined above will be workable in such a way that all of us get to focus on progressing the technology.

    At TPAC I got the impression that the TT-WG is committed to progressing WebVTT to Recommendation status. I know that the TT-CG is committed to continue developing WebVTT to its full potential for all kinds of media-time aligned content with new kinds already discussed at FOMS. Let’s enable both groups to achieve their goals. As a consequence, we will allow the two formats to excel where they do : TTML as an interchange format and WebVTT as a browser rendering format.