Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (53)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to final video [standalone version]

    31 January 2010, by

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information about the file’s audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (7332)

  • What is Multi-Touch Attribution? (And How To Get Started)

    2 February 2023, by Erin — Analytics Tips

    Good marketing thrives on data. Or more precisely — its interpretation. Using modern analytics software, we can determine which marketing actions steer prospects towards the desired action (a conversion event). 

    An attribution model in marketing is a set of rules that determine how various marketing tactics and channels impact the visitor’s progress towards a conversion. 

    Yet, as customer journeys become more complicated and involve multiple “touches”, standard marketing reports no longer tell the full story. 

    That’s when multi-touch attribution analysis comes to the fore. 

    What is Multi-Touch Attribution?

    Multi-touch attribution (also known as multi-channel attribution or cross-channel attribution) measures the impact that every touchpoint along the consumer journey has on a conversion. 

    Unlike single-touch reporting, multi-touch attribution models give credit to each marketing element — a social media ad, an on-site banner, an email link click, etc. By seeing impacts from every touchpoint and channel, marketers can avoid false assumptions or subpar budget allocations.

    To better understand the concept, let’s interpret the same customer journey using a standard single-touch report vs a multi-touch attribution model. 

    Picture this: Jammie is shopping around for a privacy-centred web analytics solution. She saw a recommendation on Twitter and ended up on the Matomo website. After browsing a few product pages and checking comparisons with other web analytics tools, she signs up for a webinar. One week after attending, Jammie is convinced that Matomo is the right tool for her business, goes directly to the Matomo website and starts a free trial. 

    • A standard single-touch report would attribute 100% of the conversion to direct traffic, which doesn’t give an accurate view of the multiple touchpoints that led Jammie to start a free trial. 
    • A multi-channel attribution report would showcase all the channels involved in the free trial conversion — social media, website content, the webinar, and then the direct traffic source.

    In other words: Multi-touch attribution helps you understand how prospects move through the sales funnel and which elements steer them towards the desired outcome. 

    Types of Attribution Models

    As marketers, we know that multiple factors play into a conversion — channel type, timing, user’s stage on the buyer journey and so on. Various attribution models exist to reflect this variability. 


    First Interaction attribution model (otherwise known as first touch) gives all credit for the conversion to the first channel (for example — a referral link) and doesn’t report on any of the other interactions a user had with your company (e.g., clicked a newsletter link, engaged with a landing page, or browsed the blog).

    First-touch helps optimise the top of your funnel and establish which channels bring the best leads. However, it doesn’t offer any insight into other factors that persuaded a user to convert. 

    Last Interaction attribution model (also known as last touch) allocates 100% credit to the last channel before conversion — be it direct traffic, paid ad, or an internal product page.

    The data is useful for optimising the bottom-of-the-funnel (BoFU) elements. But you have no visibility into assisted conversions — interactions a user had prior to conversion. 

    Last Non-Direct attribution model excludes direct traffic and assigns 100% credit for a conversion to the last channel a user interacted with before converting. For instance, a social media post will receive 100% of the credit if a shopper buys a product three days later. 

    This model is more telling about the other channels involved in the sales process. Yet, you’re only looking one step backwards, which may not be sufficient for companies with longer sales cycles.

    Linear attribution model distributes equal credit for a conversion across all tracked touchpoints.

    For instance, with a four touchpoint conversion (e.g., an organic visit, then a direct visit, then a social visit, then a visit and conversion from an ad campaign) each touchpoint would receive 25% credit for that single conversion.

    This is the simplest multi-channel attribution modelling technique many tools support. The nuance is that linear models don’t reflect the true impact of various events. After all, a paid ad that introduced your brand to the shopper and a time-sensitive discount code at the checkout page probably did more than the blog content a shopper browsed in between. 

    Position Based attribution model allocates a 40% credit to the first and the last touchpoints and then spreads the remaining 20% across the touchpoints between the first and last. 

    This attribution model comes in handy for optimising conversions across the top and the bottom of the funnel. But it doesn’t provide much insight into the middle, which can skew your decision-making. For instance, you may overlook cases when a shopper landed via a social media post, then was re-engaged via email, and proceeded to checkout after an organic visit. Without email marketing, that sale may not have happened.

    Time decay attribution model adjusts the credit based on the timing of the interactions. Touchpoints closest to the conversion get the highest score, while the earliest ones get less weight (e.g., 5%-5%-10%-15%-25%-30%).

    This multi-channel attribution model works great for tracking the bottom of the funnel, but it underestimates the impact of brand awareness campaigns or assisted conversions at mid-stage. 
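    To make the weighting schemes above concrete, here is a minimal sketch (in C++, purely for illustration; the journey data and the time-decay factor are invented, and real analytics tools expose such weights as configurable settings) of how credit for a single conversion could be split under the linear, position-based and time-decay models:

    #include <cstdio>
    #include <string>
    #include <vector>

    // One recorded interaction on the path to a conversion (hypothetical data).
    struct Touchpoint { std::string channel; };

    // Linear: every touchpoint gets the same share of the conversion.
    std::vector<double> linearCredit(size_t n) {
        return std::vector<double>(n, 1.0 / n);
    }

    // Position-based: 40% to the first and the last touchpoint, the remaining
    // 20% spread evenly across the touchpoints in between.
    std::vector<double> positionBasedCredit(size_t n) {
        std::vector<double> w(n, 0.0);
        if (n == 1) { w[0] = 1.0; return w; }
        if (n == 2) { w[0] = w[1] = 0.5; return w; }
        w[0] = w[n - 1] = 0.40;
        for (size_t i = 1; i + 1 < n; ++i) w[i] = 0.20 / (n - 2);
        return w;
    }

    // Time-decay: each step closer to the conversion weighs more (the factor of
    // 2 is an arbitrary illustration), then weights are normalised to sum to 1.
    std::vector<double> timeDecayCredit(size_t n, double decay = 2.0) {
        std::vector<double> w(n);
        double sum = 0.0, cur = 1.0;
        for (size_t i = 0; i < n; ++i) { w[i] = cur; sum += cur; cur *= decay; }
        for (double &x : w) x /= sum;
        return w;
    }

    int main() {
        // The journey from the article: social -> content -> webinar -> direct.
        std::vector<Touchpoint> journey = {
            {"social"}, {"website content"}, {"webinar"}, {"direct"}};

        auto report = [&](const char *name, const std::vector<double> &w) {
            std::printf("%s:\n", name);
            for (size_t i = 0; i < journey.size(); ++i)
                std::printf("  %-16s %.0f%%\n", journey[i].channel.c_str(), w[i] * 100);
        };
        report("Linear", linearCredit(journey.size()));
        report("Position-based", positionBasedCredit(journey.size()));
        report("Time-decay", timeDecayCredit(journey.size()));
        return 0;
    }

    For the four-touch journey above, this prints 25% per channel for the linear model, 40/10/10/40 for the position-based model, and a split that increasingly favours the final direct visit under time decay.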

    Why Use Multi-Touch Attribution Modelling

    Multi-touch attribution provides you with the full picture of your funnel. With accurate data across all touchpoints, you can employ targeted conversion rate optimisation (CRO) strategies to maximise the impact of each campaign. 

    Most marketers and analysts prefer using multi-touch attribution modelling — and for some good reasons.

    Issues multi-touch attribution solves 

    • Funnel visibility. Understand which tactics play an important role at the top, middle and bottom of your funnel, instead of second-guessing what’s working or not. 
    • Budget allocations. Spend money on channels and tactics that bring a positive return on investment (ROI). 
    • Assisted conversions. Learn how different elements and touchpoints cumulatively contribute to the ultimate goal — a conversion event — to optimise accordingly. 
    • Channel segmentation. Determine which assets drive the most qualified and engaged leads to replicate them at scale.
    • Campaign benchmarking. Compare how different marketing activities from affiliate marketing to social media perform against the same metrics.

    How To Get Started With Multi-Touch Attribution 

    To make multi-touch attribution part of your analytics setup, follow these steps:

    1. Define Your Marketing Objectives 

    Multi-touch attribution helps you better understand what led people to convert on your site. But to capture that, you first need to map the standard purchase journeys, which include a series of touchpoints — instances when a prospect forms an opinion about your business.

    Touchpoints include:

    • On-site interactions (e.g., reading a blog post, browsing product pages, using an on-site calculator, etc.)
    • Off-site interactions (e.g., reading a review, clicking a social media link, interacting with an ad, etc.)

    Combined, these interactions make up your sales funnel — a designated path you’ve set up to lead people toward the desired action (aka a conversion). 

    Depending on your business model, you can count any of the following as a conversion:

    • Purchase 
    • Account registration 
    • Free trial request 
    • Contact form submission 
    • Online reservation 
    • Demo call request 
    • Newsletter subscription

    So your first task is to create a set of conversion objectives for your business and add them as Goals or Conversions in your web analytics solution. Then brainstorm how various touchpoints contribute to these objectives. 

    Web analytics tools with multi-channel attribution, like Matomo, allow you to obtain an extra dimension of data on touchpoints via Tracked Events. Using Event Tracking, you can analyse how many people started doing a desired action (e.g., typing details into the form) but never completed the task. This way you can quickly identify “leaking” touchpoints in your funnel and fix them. 
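    As a rough sketch of that “leak” analysis (the step names and counts below are invented, not real Matomo report data), you could compute a completion rate for each tracked step and flag the one where most visitors drop off:

    #include <cstdio>
    #include <string>
    #include <vector>

    // Hypothetical tracked-event counts for each step of a sign-up form.
    struct Step { std::string name; long started; long completed; };

    int main() {
        std::vector<Step> steps = {
            {"opened form",   1200, 950},
            {"typed details",  950, 610},
            {"submitted form", 610, 580},
        };

        size_t worst = 0;
        double worstRate = 1.0;
        for (size_t i = 0; i < steps.size(); ++i) {
            double completion = steps[i].started
                ? static_cast<double>(steps[i].completed) / steps[i].started : 0.0;
            std::printf("%-16s completion %.0f%% (drop-off %.0f%%)\n",
                        steps[i].name.c_str(), completion * 100, (1 - completion) * 100);
            if (completion < worstRate) { worstRate = completion; worst = i; }
        }
        std::printf("Leakiest touchpoint: %s\n", steps[worst].name.c_str());
        return 0;
    }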

    2. Select an Attribution Model 

    Multi-touch attribution models have inherent tradeoffs. The linear model doesn’t always represent the true role and importance of each channel. The position-based model emphasises the first and last channels while diminishing the importance of assisted conversions. The time-decay model, on the contrary, downplays the role that awareness-related campaigns played.

    To select the right attribution model for your business, consider your objectives. Is it more important for you to understand your best top-of-funnel channels to optimise customer acquisition costs (CAC)? Or would you rather maximise your on-site conversion rates? 

    Your industry and the average cycle length should also guide your choice. Position-based models can work best for eCommerce and SaaS businesses where both CAC and on-site conversion rates play an important role. Manufacturing companies or educational services providers, on the contrary, will benefit more from a time-decay model as it better represents the lengthy sales cycles. 

    3. Collect and Organise Data From All Touchpoints 

    Multi-touch attribution models are based on available funnel data. So to get started, you will need to determine which data sources you have and how to best leverage them for attribution modelling. 

    Types of data you should collect: 

    • General web analytics data : Insights on visitors’ on-site actions — visited pages, clicked links, form submissions and more.
    • Goals (Conversions) : Reports on successful conversions across different types of assets. 
    • Behavioural user data : Some tools also offer advanced features such as heatmaps, session recording and A/B tests. These too provide ample data into user behaviours, which you can use to map and optimise various touchpoints.

    You can also implement extra tracking, for instance for contact form submissions, live chat contacts or email marketing campaigns to identify repeat users in your system. Just remember to stay on the good side of data protection laws and respect your visitors’ privacy. 

    Separately, you can obtain top-of-the-funnel data by analysing referral traffic sources (channel, campaign type, keyword used, etc.). A Tag Manager comes in handy as it allows you to zoom in on particular assets (e.g., a newsletter, an affiliate, a social campaign, etc.). 

    Combined, these data points can be parsed by an app, supporting multi-touch attribution (or a custom algorithm) and reported back to you as specific findings. 

    Sounds easy, right? Well, the devil is in the details. Getting ample, accurate data for multi-touch attribution modelling isn’t easy. 

    Marketing analytics has an accuracy problem, mainly for two reasons:

    • Cookie consent banner rejection 
    • Data sampling application

    Please note that we are not able to provide legal advice, so it’s important that you consult with your own DPO to ensure compliance with all relevant laws and regulations.

    If you’re collecting web analytics in the EU, you know that showing a cookie consent banner is a GDPR must-do. But many consumers don’t rush to accept cookie consent banners. The average consent rate for cookies in 2021 stood at 54% in Italy, 45% in France, and 44% in Germany. The consent rates are likely lower in 2023, as Google was forced to roll out a “reject all” button for cookie tracking in Europe, while privacy organisations lodge complaints against individual businesses for deceptive banners. 

    For marketers, cookie rejection means substantial gaps in analytics data. The good news is that you can fill in those gaps by using a privacy-centred web analytics tool like Matomo. 

    Matomo takes extra safeguards to protect user privacy and supports fully cookieless tracking. Because of that, Matomo is legally exempt from tracking consent in France. Plus, you can configure our analytics tool to run without consent banners in other markets outside of Germany and the UK. This way you retain the data you need for audience modelling without breaching any privacy regulations. 

    Data sampling application partially stems from the above. When a web analytics or multi-channel attribution tool cannot secure first-hand data, the “guessing game” begins. Google Analytics, as well as other tools, often relies on synthetic, AI-generated data to fill in the reporting gaps. As a result, your multi-touch attribution model doesn’t depict the real state of affairs. Instead, it shows AI-produced guesstimates of what transpired whenever not enough real-world evidence is available.

    4. Evaluate and Select an Attribution Tool 

    Google Analytics (GA) offers several multi-touch attribution models for free (linear, time-decay and position-based). The disadvantage of GA multi-touch attribution is its lower accuracy due to cookie rejection and data sampling application.

    At the same time, you cannot create custom credit allocations for the proposed models, unless you have the paid version of GA, Google Analytics 360. This version of GA comes with a custom Attribution Modeling Tool (AMT). The price tag, however, starts at USD $50,000 per year. 

    Matomo Cloud offers multi-channel conversion attribution as a feature and it is available as a plug-in on the marketplace for Matomo On-Premise. We support linear, position-based, first-interaction, last-interaction, last non-direct and time-decay modelling, based fully on first-hand data. You also get more precise insights because cookie consent isn’t an issue with us. 

    Most multi-channel attribution tools, like Google Analytics and Matomo, provide out-of-the-box multi-touch attribution models. But other tools, like Matomo On-Premise, also provide full access to raw data so you can develop your own multi-touch attribution models and do custom attribution analysis. The ability to create custom attribution analysis is particularly beneficial for data analysts or organisations with complex and unique buyer journeys. 

    Conclusion

    Ultimately, multi-channel attribution gives marketers greater visibility into the customer journey. By analysing multiple touchpoints, you can establish how various marketing efforts contribute to conversions. Then use this information to inform your promotional strategy, budget allocations and CRO efforts. 

    The key to benefiting the most from multi-touch attribution is accurate data. If your analytics solution isn’t telling you the full story, your multi-touch model won’t either. 

    Collect accurate visitor data for multi-touch attribution modelling with Matomo. Start your free 21-day trial now

  • Scale filter crashes with error when used from transcoding example

    27 June 2017, by Vali

    I’ve modified this code example a bit (just so it compiles as C++):
    https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/transcoding.c.

    What works: as is (null filter), and a number of other filters like framerate, drawtext, ...

    What doesn’t work: the scale filter when scaling down.

    I use the following syntax for scale (I’ve tried many others too, with the same effect):
    "scale=w=iw/2:-1"

    The error is: "Input picture width (240) is greater than stride (128)", where the values for width and stride depend on the input.

    Misc environment info: Windows, VS 2017, input example: rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov

    Any clue as to what I’m doing wrong?

    Thanks!
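
    One plausible reading of the error, offered as an assumption rather than a confirmed diagnosis: in the transcoding example the encoder is opened with the decoder’s width/height before the filter graph exists, so once the scale filter halves the picture the encoder still expects the original width while the filtered frames arrive with a smaller stride. A sketch of the kind of adjustment that would address this (a fragment, not a complete program; it assumes the encoder setup is moved after avfilter_graph_config(), unlike in the original example) could look like:

    /* Sketch only: assumes init_filters() has already built and configured the
     * graph, so filter_ctx[videoStreamIndex].buffersink_ctx is valid here. */
    AVFilterContext *sink = filter_ctx[videoStreamIndex].buffersink_ctx;

    enc_ctx->width     = av_buffersink_get_w(sink);   /* post-filter width  */
    enc_ctx->height    = av_buffersink_get_h(sink);   /* post-filter height */
    enc_ctx->pix_fmt   = (enum AVPixelFormat)av_buffersink_get_format(sink);
    enc_ctx->time_base = av_buffersink_get_time_base(sink);

    ret = avcodec_open2(enc_ctx, encoder, NULL);      /* open with the scaled size */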


    EDITED to add working code sample


    #pragma comment(lib, "avcodec.lib")
    #pragma comment(lib, "avutil.lib")
    #pragma comment(lib, "avformat.lib")
    #pragma comment(lib, "avfilter.lib")

    /*
    * Copyright (c) 2010 Nicolas George
    * Copyright (c) 2011 Stefano Sabatini
    * Copyright (c) 2014 Andrey Utkin
    *
    **** EDITED 2017 for testing (see original here: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/transcoding.c)
    *
    * Permission is hereby granted, free of charge, to any person obtaining a copy
    * of this software and associated documentation files (the "Software"), to deal
    * in the Software without restriction, including without limitation the rights
    * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    * copies of the Software, and to permit persons to whom the Software is
    * furnished to do so, subject to the following conditions:
    *
    * The above copyright notice and this permission notice shall be included in
    * all copies or substantial portions of the Software.
    *
    * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
    * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
    * THE SOFTWARE.
    */

    /**
    * @file
    * API example for demuxing, decoding, filtering, encoding and muxing
    * @example transcoding.c
    */

    extern "C"
    {
       #include <libavcodec/avcodec.h>
       #include <libavformat/avformat.h>
       #include <libavfilter/avfiltergraph.h>
       #include <libavfilter/buffersink.h>
       #include <libavfilter/buffersrc.h>
       #include <libavutil/opt.h>
       #include <libavutil/pixdesc.h>
    }


    static AVFormatContext *ifmt_ctx;
    static AVFormatContext *ofmt_ctx;
    typedef struct FilteringContext {
       AVFilterContext *buffersink_ctx;
       AVFilterContext *buffersrc_ctx;
       AVFilterGraph *filter_graph;
    } FilteringContext;
    static FilteringContext *filter_ctx;

    typedef struct StreamContext {
       AVCodecContext *dec_ctx;
       AVCodecContext *enc_ctx;
    } StreamContext;
    static StreamContext *stream_ctx;

    static int open_input_file(const char *filename, int& videoStreamIndex)
    {
       int ret;
       unsigned int i;

       ifmt_ctx = NULL;
       if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
           return ret;
       }

       if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
           return ret;
       }

       // Just need video
       videoStreamIndex = -1;
       for (unsigned int i = 0; i < ifmt_ctx->nb_streams; i++)
       {
           if (ifmt_ctx->streams[i]->codecpar->codec_type != AVMEDIA_TYPE_VIDEO)
               continue;
           videoStreamIndex = i;
           break;
       }
       if (videoStreamIndex < 0)
       {
           av_log(NULL, AV_LOG_ERROR, "Cannot find video stream\n");
           return videoStreamIndex;
       }


       stream_ctx = (StreamContext*)av_mallocz_array(ifmt_ctx->nb_streams, sizeof(*stream_ctx));
       if (!stream_ctx)
           return AVERROR(ENOMEM);

       for (i = 0; i < ifmt_ctx->nb_streams; i++) {

           // Just need video
           if (i != videoStreamIndex)
               continue;


           AVStream *stream = ifmt_ctx->streams[i];
           AVCodec *dec = avcodec_find_decoder(stream->codecpar->codec_id);
           AVCodecContext *codec_ctx;
           if (!dec) {
               av_log(NULL, AV_LOG_ERROR, "Failed to find decoder for stream #%u\n", i);
               return AVERROR_DECODER_NOT_FOUND;
           }
           codec_ctx = avcodec_alloc_context3(dec);
           if (!codec_ctx) {
               av_log(NULL, AV_LOG_ERROR, "Failed to allocate the decoder context for stream #%u\n", i);
               return AVERROR(ENOMEM);
           }
           ret = avcodec_parameters_to_context(codec_ctx, stream->codecpar);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Failed to copy decoder parameters to input decoder context "
                   "for stream #%u\n", i);
               return ret;
           }
           /* Reencode video & audio and remux subtitles etc. */
           if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
               || codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
               if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO)
                   codec_ctx->framerate = av_guess_frame_rate(ifmt_ctx, stream, NULL);
               /* Open decoder */
               ret = avcodec_open2(codec_ctx, dec, NULL);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
                   return ret;
               }
           }
           stream_ctx[i].dec_ctx = codec_ctx;
       }

       av_dump_format(ifmt_ctx, 0, filename, 0);
       return 0;
    }

    static int open_output_file(const char *filename, const int videoStreamIndex)
    {
       AVStream *out_stream;
       AVStream *in_stream;
       AVCodecContext *dec_ctx, *enc_ctx;
       AVCodec *encoder;
       int ret;
       unsigned int i;

       ofmt_ctx = NULL;
       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, filename);
       if (!ofmt_ctx) {
           av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
           return AVERROR_UNKNOWN;
       }


       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           // Just need video
           if (i != videoStreamIndex)
               continue;

           out_stream = avformat_new_stream(ofmt_ctx, NULL);
           if (!out_stream) {
               av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
               return AVERROR_UNKNOWN;
           }

           in_stream = ifmt_ctx->streams[i];
           dec_ctx = stream_ctx[i].dec_ctx;

           if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
               /* in this example, we choose transcoding to same codec */
               encoder = avcodec_find_encoder(dec_ctx->codec_id);
               if (!encoder) {
                   av_log(NULL, AV_LOG_FATAL, "Necessary encoder not found\n");
                   return AVERROR_INVALIDDATA;
               }
               enc_ctx = avcodec_alloc_context3(encoder);
               if (!enc_ctx) {
                   av_log(NULL, AV_LOG_FATAL, "Failed to allocate the encoder context\n");
                   return AVERROR(ENOMEM);
               }

               /* In this example, we transcode to same properties (picture size,
               * sample rate etc.). These properties can be changed for output
               * streams easily using filters */
               enc_ctx->height = dec_ctx->height;
               enc_ctx->width = dec_ctx->width;
               enc_ctx->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;
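               /* NOTE (added observation, not part of the original example): these
                * dimensions are copied from the decoder before any filter graph is
                * created. If a filter such as "scale=w=iw/2:-1" later shrinks the
                * frames, the encoder opened below still expects the original width,
                * which is one plausible cause of the "Input picture width ... is
                * greater than stride ..." error mentioned in the question above. */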
               /* take first format from list of supported formats */
               if (encoder->pix_fmts)
                   enc_ctx->pix_fmt = encoder->pix_fmts[0];
               else
                   enc_ctx->pix_fmt = dec_ctx->pix_fmt;

               /* video time_base can be set to whatever is handy and supported by encoder */
               //enc_ctx->time_base = av_inv_q(dec_ctx->framerate);
               enc_ctx->time_base = dec_ctx->time_base;


               /* Third parameter can be used to pass settings to encoder */
               ret = avcodec_open2(enc_ctx, encoder, NULL);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
                   return ret;
               }
               ret = avcodec_parameters_from_context(out_stream->codecpar, enc_ctx);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Failed to copy encoder parameters to output stream #%u\n", i);
                   return ret;
               }
               if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
                   enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

               out_stream->time_base = enc_ctx->time_base;
               stream_ctx[i].enc_ctx = enc_ctx;
           }
           else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
               av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
               return AVERROR_INVALIDDATA;
           }
           else {
               /* if this stream must be remuxed */
               ret = avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Copying parameters for stream #%u failed\n", i);
                   return ret;
               }
               out_stream->time_base = in_stream->time_base;
           }

       }
       av_dump_format(ofmt_ctx, 0, filename, 1);

       if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
           ret = avio_open(&ofmt_ctx->pb, filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
               return ret;
           }
       }

       /* init muxer, write output file header */
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0) {
           av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
           return ret;
       }

       return 0;
    }

    static int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx,
       AVCodecContext *enc_ctx, const char *filter_spec)
    {
       char args[512];
       int ret = 0;
       AVFilter *buffersrc = NULL;
       AVFilter *buffersink = NULL;
       AVFilterContext *buffersrc_ctx = NULL;
       AVFilterContext *buffersink_ctx = NULL;
       AVFilterInOut *outputs = avfilter_inout_alloc();
       AVFilterInOut *inputs = avfilter_inout_alloc();
       AVFilterGraph *filter_graph = avfilter_graph_alloc();

       if (!outputs || !inputs || !filter_graph) {
           ret = AVERROR(ENOMEM);
           goto end;
       }

       if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
           buffersrc = avfilter_get_by_name("buffer");
           buffersink = avfilter_get_by_name("buffersink");
           if (!buffersrc || !buffersink) {
               av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }

           snprintf(args, sizeof(args),
               "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
               dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
               dec_ctx->time_base.num, dec_ctx->time_base.den,
               dec_ctx->sample_aspect_ratio.num,
               dec_ctx->sample_aspect_ratio.den);

           ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
               args, NULL, filter_graph);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
               goto end;
           }

           ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
               NULL, NULL, filter_graph);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
               goto end;
           }

           ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
               (uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
               AV_OPT_SEARCH_CHILDREN);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
               goto end;
           }
       }
       else {
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       /* Endpoints for the filter graph. */
       outputs->name = av_strdup("in");
       outputs->filter_ctx = buffersrc_ctx;
       outputs->pad_idx = 0;
       outputs->next = NULL;

       inputs->name = av_strdup("out");
       inputs->filter_ctx = buffersink_ctx;
       inputs->pad_idx = 0;
       inputs->next = NULL;

       if (!outputs->name || !inputs->name) {
           ret = AVERROR(ENOMEM);
           goto end;
       }

       if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
           &inputs, &outputs, NULL)) < 0)
           goto end;

       if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
           goto end;

       /* Fill FilteringContext */
       fctx->buffersrc_ctx = buffersrc_ctx;
       fctx->buffersink_ctx = buffersink_ctx;
       fctx->filter_graph = filter_graph;

    end:
       avfilter_inout_free(&inputs);
       avfilter_inout_free(&outputs);

       return ret;
    }

    static int init_filters(const int videoStreamIndex)
    {
       const char *filter_spec;
       unsigned int i;
       int ret;
       filter_ctx = (FilteringContext*)av_malloc_array(ifmt_ctx->nb_streams, sizeof(*filter_ctx));
       if (!filter_ctx)
           return AVERROR(ENOMEM);

       for (i = 0; i < ifmt_ctx->nb_streams; i++) {

           // Just video
           if (i != videoStreamIndex)
               continue;

           filter_ctx[i].buffersrc_ctx = NULL;
           filter_ctx[i].buffersink_ctx = NULL;
           filter_ctx[i].filter_graph = NULL;
           if (!(ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO
               || ifmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO))
               continue;

           filter_spec = "null"; /* passthrough (dummy) filter for video */
           //filter_spec = "scale=w=iw/2:-1";
           // filter_spec = "drawtext=fontfile=FreeSerif.ttf: text='%{localtime}': x=w-text_w: y=0: fontsize=24: fontcolor=yellow@1.0: box=1: boxcolor=red@1.0";
           // filter_spec = "drawtext=fontfile=FreeSerif.ttf :text='test': x=w-text_w: y=text_h: fontsize=24: fontcolor=yellow@1.0: box=1: boxcolor=red@1.0";

           ret = init_filter(&filter_ctx[i], stream_ctx[i].dec_ctx,
               stream_ctx[i].enc_ctx, filter_spec);
           if (ret)
               return ret;
       }
       return 0;
    }

    static int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame, const int videoStreamIndex) {

       // Just video
       if (stream_index != videoStreamIndex)
           return 0;

       int ret;
       int got_frame_local;
       AVPacket enc_pkt;
       int(*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
           (ifmt_ctx->streams[stream_index]->codecpar->codec_type ==
               AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;

       if (!got_frame)
           got_frame = &got_frame_local;

       // av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
       /* encode filtered frame */
       enc_pkt.data = NULL;
       enc_pkt.size = 0;
       av_init_packet(&enc_pkt);

       ret = enc_func(stream_ctx[stream_index].enc_ctx, &enc_pkt,
           filt_frame, got_frame);

       av_frame_free(&filt_frame);
       if (ret < 0)
           return ret;
       if (!(*got_frame))
           return 0;

       /* prepare packet for muxing */
       /*enc_pkt.stream_index = stream_index;
       av_packet_rescale_ts(&enc_pkt, stream_ctx[stream_index].enc_ctx->time_base, ofmt_ctx->streams[stream_index]->time_base);*/
       enc_pkt.stream_index = 0;
       av_packet_rescale_ts(&enc_pkt, stream_ctx[stream_index].enc_ctx->time_base, ofmt_ctx->streams[0]->time_base);

       av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
       /* mux encoded frame */
       ret = av_interleaved_write_frame(ofmt_ctx, &enc_pkt);
       return ret;
    }

    static int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index, const int videoStreamIndex)
    {
       // Just video, all else crashes
       if (stream_index != videoStreamIndex)
           return 0;

       int ret;
       AVFrame *filt_frame;

       // av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");
       /* push the decoded frame into the filtergraph */
       ret = av_buffersrc_add_frame_flags(filter_ctx[stream_index].buffersrc_ctx,
           frame, 0);
       if (ret < 0) {
           av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
           return ret;
       }

       /* pull filtered frames from the filtergraph */
       while (1) {
           filt_frame = av_frame_alloc();
           if (!filt_frame) {
               ret = AVERROR(ENOMEM);
               break;
           }
           // av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n");
           ret = av_buffersink_get_frame(filter_ctx[stream_index].buffersink_ctx,
               filt_frame);
           if (ret < 0) {
               /* if no more frames for output - returns AVERROR(EAGAIN)
               * if flushed and no more frames for output - returns AVERROR_EOF
               * rewrite retcode to 0 to show it as normal procedure completion
               */
               if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                   ret = 0;
               av_frame_free(&filt_frame);
               break;
           }

           filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
           ret = encode_write_frame(filt_frame, stream_index, NULL, videoStreamIndex);
           if (ret < 0)
               break;
       }

       return ret;
    }

    static int flush_encoder(unsigned int stream_index, const int videoStreamIndex)
    {
       int ret;
       int got_frame;

       // Just video
       if (stream_index != videoStreamIndex)
           return 0;

       if (!(stream_ctx[stream_index].enc_ctx->codec->capabilities &
           AV_CODEC_CAP_DELAY))
           return 0;

       while (1) {
           av_log(NULL, AV_LOG_INFO, "Flushing stream #%u encoder\n", stream_index);
           ret = encode_write_frame(NULL, stream_index, &got_frame, videoStreamIndex);
           if (ret < 0)
               break;
           if (!got_frame)
               return 0;
       }
       return ret;
    }


    #include <vector>

    int main(int argc, char **argv)
    {
       int ret;

       AVPacket packet;
       packet.data = NULL;
       packet.size = 0;

       AVFrame *frame = NULL;
       enum AVMediaType type;
       unsigned int stream_index;
       unsigned int i;
       int got_frame;
       int(*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);


    #ifdef _DEBUG
       // Hardcoded arguments
       std::vector<char*> varguments;
       {
           varguments.push_back(argv[0]);

           // Source
           varguments.push_back("./big_buck_bunny_short.mp4 ");

           // Destination
           varguments.push_back("./big_buck_bunny_short-processed.mp4");
       }

       char** arguments = new char*[varguments.size()];
       for (unsigned int i = 0; i < varguments.size(); i++)
       {
           arguments[i] = varguments[i];
       }
       argc = varguments.size();
       argv = arguments;
    #endif // _DEBUG


       if (argc != 3) {
           av_log(NULL, AV_LOG_ERROR, "Usage: %s <input file="file" /> <output file="file">\n", argv[0]);
           return 1;
       }

       av_register_all();
       avfilter_register_all();

       int videoStreamIndex = -1;
       if ((ret = open_input_file(argv[1], videoStreamIndex)) < 0)
           goto end;
       if ((ret = open_output_file(argv[2], videoStreamIndex)) < 0)
           goto end;
       if ((ret = init_filters(videoStreamIndex)) < 0)
           goto end;

       // Stop after a couple of frames
       int framesToGet = 100;

       /* read all packets */
       //while (framesToGet--)
       while(1)
       {
           if ((ret = av_read_frame(ifmt_ctx, &packet)) < 0)
               break;
           stream_index = packet.stream_index;

           // I just need video
           if (stream_index != videoStreamIndex) {
               av_packet_unref(&packet);
               continue;
           }

           type = ifmt_ctx->streams[packet.stream_index]->codecpar->codec_type;
           av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n",
               stream_index);

           if (filter_ctx[stream_index].filter_graph) {
               av_log(NULL, AV_LOG_DEBUG, "Going to reencode&amp;filter the frame\n");
               frame = av_frame_alloc();
               if (!frame) {
                   ret = AVERROR(ENOMEM);
                   break;
               }
               av_packet_rescale_ts(&packet,
                   ifmt_ctx->streams[stream_index]->time_base,
                   stream_ctx[stream_index].dec_ctx->time_base);
               dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 :
                   avcodec_decode_audio4;
               ret = dec_func(stream_ctx[stream_index].dec_ctx, frame,
                   &got_frame, &packet);
               if (ret < 0) {
                   av_frame_free(&frame);
                   av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
                   break;
               }

               if (got_frame) {
                   frame->pts = frame->best_effort_timestamp;
                   ret = filter_encode_write_frame(frame, stream_index, videoStreamIndex);
                   av_frame_free(&frame);
                   if (ret < 0)
                       goto end;
               }
               else {
                   av_frame_free(&frame);
               }
           }
           else {
               /* remux this frame without reencoding */
               av_packet_rescale_ts(&packet,
                   ifmt_ctx->streams[stream_index]->time_base,
                   ofmt_ctx->streams[stream_index]->time_base);

               ret = av_interleaved_write_frame(ofmt_ctx, &packet);
               if (ret < 0)
                   goto end;
           }
           av_packet_unref(&packet);
       }

       /* flush filters and encoders */
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           /* flush filter */
           if (!filter_ctx[i].filter_graph)
               continue;
           ret = filter_encode_write_frame(NULL, i, videoStreamIndex);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
               goto end;
           }

           /* flush encoder */
           ret = flush_encoder(i, videoStreamIndex);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
               goto end;
           }
       }

       av_write_trailer(ofmt_ctx);
    end:
       av_packet_unref(&packet);
       av_frame_free(&frame);
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           // Just video
           if (i != videoStreamIndex)
               continue;
           avcodec_free_context(&stream_ctx[i].dec_ctx);
           if (ofmt_ctx && ofmt_ctx->nb_streams > i && ofmt_ctx->streams[i] && stream_ctx[i].enc_ctx)
               avcodec_free_context(&stream_ctx[i].enc_ctx);
           if (filter_ctx && filter_ctx[i].filter_graph)
               avfilter_graph_free(&filter_ctx[i].filter_graph);
       }
       av_free(filter_ctx);
       av_free(stream_ctx);
       avformat_close_input(&ifmt_ctx);
       if (ofmt_ctx && !(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
           avio_closep(&ofmt_ctx->pb);
       avformat_free_context(ofmt_ctx);

       /*if (ret < 0)
           av_log(NULL, AV_LOG_ERROR, "Error occurred: %s\n", av_err2str(ret));*/

       return ret ? 1 : 0;
    }
  • My journey to Coviu

    27 October 2015, by silvia

    My new startup just released our MVP – this is the story of what got me here.

    I love creating new applications that let people do their work better or in a manner that wasn’t possible before.

    My first such passion was as a student intern, when I built a system for a building and loan association’s monthly customer magazine. The group I worked with was managing their advertiser contacts through a set of paper cards and I wrote a dBase-based system (yes, that long ago) that would manage their customer relationships. They loved it – until it got replaced by an SAP system that cost 100 times what I cost them, had really poor UX, and only gave them half the functionality. It was a corporate system with ongoing support, which made all the difference to them.

    Dr Scholz und Partner GmbH
    The story repeated itself with a CRM for my uncle’s construction company, and with a resume and quotation management system for Accenture right after Uni, both of which I left behind when I decided to go into research.

    Even as a PhD student, I never lost sight of challenges that people were facing and wanted to develop technology to overcome problems. The aim of my PhD thesis was to prepare for the oncoming onslaught of audio and video on the Internet (yes, this was 1994!) by developing algorithms to automatically extract and locate information in such files, which would enable users to structure, index and search such content.

    Many of the use cases that we explored are now part of products or continue to be challenges : finding music that matches your preferences, identifying music or video pieces e.g. to count ads on the radio or to mark copyright infringement, or the automated creation of video summaries such as trailers.

    CSIRO

    This continued when I joined the CSIRO in Australia – I was working on segmenting speech into words or talk spurts since that would simplify captioning & subtitling, and on MPEG-7 which was a (slightly over-engineered) standard to structure metadata about audio and video.

    In 2001 I had the idea of replicating the Web for videos : i.e. creating hyperlinked and searchable video-only experiences. We called it “Annodex” for annotated and indexed video and it needed full-screen hyperlinked video in browsers – man were we ahead of our time! It was my first step into standards, got several IETF RFCs to my name, and started my involvement with open codecs through Xiph.

    Around the time that YouTube was founded in 2006, I founded Vquence – originally a video search company for the Web, but pivoted to a video metadata mining company. Vquence still exists and continues to sell its data to channel partners, but it lacks the user impact that has always driven my work.

    As the video element started being developed for HTML5, I had to get involved. I contributed many use cases to the W3C, became a co-editor of the HTML5 spec and focused on video captioning with WebVTT while contracting to Mozilla and later to Google. We made huge progress and today the technology exists to publish video on the Web with captions, making the Web more inclusive for everybody. I contributed code to YouTube and Google Chrome, but was keen to make a bigger impact again.

    The opportunity came when a couple of former CSIRO colleagues who now worked for NICTA approached me to get me interested in addressing new use cases for video conferencing in the context of WebRTC. We worked on a kiosk-style solution to service delivery for large service organisations, particularly targeting government. The emerging WebRTC standard posed many technical challenges that we addressed by building rtc.io , by contributing to the standards, and registering bugs on the browsers.

    Fast-forward through the development of a few further custom solutions for customers in health and education and we are starting to see patterns of need emerge. The core learning that we’ve come away with is that to get things done, you have to go beyond “talking heads” in a video call. It’s not just about seeing the other person, but much more about having a shared view of the things that need to be worked on and a shared way of interacting with them. Also, we learnt that the things that are being worked on are quite varied and may include multiple input cameras, digital documents, Web pages, applications, device data, controls, forms.

    So we set out to build a solution that would enable productive remote collaboration to take place. It would need to provide an excellent user experience, it would need to be simple to work with, provide for the standard use cases out of the box, yet be architected to be extensible for specialised data sharing needs that we knew some of our customers had. It would need to be usable directly on Coviu.com, but also able to integrate with specialised applications that some of our customers were already using, such as the applications that they spend most of their time in (CRMs, practice management systems, learning management systems, team chat systems). It would need to require our customers to sign up, yet allow their clients to join a call without signing up.

    Collaboration is a big problem. People are continuing to get more comfortable with technology and are less and less inclined to travel distances just to get a service done. In a country as large as Australia, where 12% of the population lives in rural and remote areas, people may not even be able to travel distances, particularly to receive or provide recurring or specialised services, or to achieve work/life balance. To make the world a global village, we need to be able to work together better remotely.

    The need for collaboration is being recognised by specialised Web applications already, such as the LiveShare feature of Invision for Designers, Codassium for pair programming, or the recently announced Dropbox Paper. Few go all the way to video – WebRTC is still regarded as a complicated feature to support.

    Coviu in action

    With Coviu, we’d like to offer a collaboration feature to every Web app. We now have a Web app that provides a modern and beautifully designed collaboration interface. To enable other Web apps to integrate it, we are now developing an API. Integration may entail customisation of the data sharing part of Coviu – something Coviu has been designed for. How to replicate the data and keep it consistent when people collaborate remotely – that is where Coviu makes a difference.

    We have started our journey and have just launched free signup to the Coviu base product, which allows individuals to own their own “room” (i.e. a fixed URL) in which to collaborate with others. A huge shout out goes to everyone in the Coviu team – a pretty amazing group of people – who have turned the app from an idea into reality. You are all awesome!

    With Coviu you can share and annotate:

    • images (show your mum photos of your last holidays, or get feedback on an architecture diagram from a customer),
    • pdf files (give a presentation remotely, or walk a customer through a contract),
    • whiteboards (brainstorm with a colleague), and
    • share an application window (watch a YouTube video together, or work through your task list with your colleagues).

    All of these are regarded as “shared documents” in Coviu and thus have zooming and annotations features and are listed in a document tray for ease of navigation.

    This is just the beginning of how we want to make working together online more productive. Give it a go and let us know what you think.

    http://coviu.com/
