Advanced search

Media (1)

Word: - Tags -/lev manovitch

Other articles (43)

  • Videos

    21 April 2011, by

    Like "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 <video> tag.
    One drawback of this tag is that it is not handled correctly by some browsers (Internet Explorer, to name one) and that each browser natively supports only certain video formats.
    Its main advantage, meanwhile, is native browser support for video, which removes the need for Flash and (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

On other sites (6039)

  • Trying to cancel execution and delete file using ffmpeg C API

    6 March 2020, by Vuwox

    The code below is a class that handles the conversion of multiple images, added through the add_frame() method, into a GIF with encode(). It also uses a filter to generate and apply the palette. Usage looks like this:

    Code call example

    std::unique_ptr<px::GIF::FFMPEG> gif_obj = nullptr;
    try
    {
       gif_obj = std::make_unique<px::GIF::FFMPEG>(px::Point2D<int>{1000,1000}, 12, "C:/out.gif",
                 "format=pix_fmts=rgb24,split [a][b];[a]palettegen[p];[b][p]paletteuse");

       // Example: a simple vector of images (usually processed internally)
       for(auto img : image_vector)
            gif_obj->add_frame(img);

       // Once all frames have been added, encode the final GIF with the filter applied.
       gif_obj->encode();
    }
    catch(const std::exception& e)
    {
       // An error occurred! We must close FFMPEG properly and delete the created file.
       gif_obj->cancel();
    }

    I have the following issue: if the code throws an exception for any reason, I call ffmpeg->cancel(), which is supposed to delete the GIF file on disk. But that never works; I assume there is a lock on the file or something like that. So here is my question:

    What is the proper way to close/free the FFMPEG objects so that the file can be removed afterward?
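
    For readers hitting the same symptom, one likely cause (an assumption from reading the class below, not a confirmed diagnosis): release() frees the format and codec contexts but never closes the muxer's file handle with avio_closep(&ofmt_ctx->pb), and Windows refuses to delete a file that still has an open handle. The close-before-remove principle, with a plain stream standing in for the AVIOContext:

    ```cpp
    #include <cassert>
    #include <cstdio>
    #include <fstream>

    int main()
    {
        // Write a small placeholder file, as the muxer does on disk.
        std::ofstream out("partial.gif");
        out << "GIF89a";

        // While this handle is open, std::remove() fails on Windows
        // (sharing violation). Closing it first is the equivalent of
        // closing the muxer's AVIOContext with avio_closep().
        out.close();

        // Once the handle is closed, deleting the file succeeds everywhere.
        assert(std::remove("partial.gif") == 0);
        return 0;
    }
    ```

    Under that assumption, release() would need to call avio_closep(&ofmt_ctx->pb) before freeing ofmt_ctx, so that the later remove(m_filename.c_str()) in cancel() can succeed.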


    Full class code below

    Header

    // C++ Standard includes    
    #include <memory>
    #include <string>
    #include <vector>


    // 3rd Party includes
    #ifdef __cplusplus
    extern "C" {
    #include "libavformat/avformat.h"
    #include "libavfilter/avfilter.h"
    #include "libavutil/opt.h"
    #include "libavfilter/buffersrc.h"
    #include "libavfilter/buffersink.h"
    #include "libswscale/swscale.h"
    #include "libavutil/imgutils.h"
    }
    #endif

    #define FFMPEG_MSG_LEN 2000

    namespace px
    {
       namespace GIF
       {
           class FFMPEG
           {
           public:
               FFMPEG(const px::Point2D<int>& dim,
                      const int framerate,
                      const std::string& filename,
                      const std::string& filter_cmd);

               ~FFMPEG();

               void add_frame(pxImage * const img);
               void encode();
               void cancel();

           private:

               void init_filters();            // Init everything needed to filter the input frames.
               void init_muxer();              // The muxer that creates the output file.
               void muxing_one_frame(AVFrame* frame);
               void release();

               int _ret = 0;                   // status code from FFMPEG.
               char _err_msg[FFMPEG_MSG_LEN];  // Error message buffer.


               int m_width = 0;                // The width that all future images must have to be accepted.
               int m_height = 0;               // The height that all future images must have to be accepted.

               int m_framerate = 0;            // GIF framerate.
               std::string m_filename = "";    // The GIF filename (on cache?)
               std::string m_filter_desc = ""; // The FFMPEG filter to apply over the frames.

               bool as_frame = false;

               AVFrame* picture_rgb24 = nullptr;           // Temporary frame that will hold the pxImage in an RGB24 format (NOTE: TOP-LEFT origin)

               AVFormatContext* ofmt_ctx = nullptr;        // Output format context for the GIF file.
               AVCodecContext* o_codec_ctx = nullptr;      // output codec for the GIF

               AVFilterGraph* filter_graph = nullptr;      // Filter graph associated with the filter string we want to execute.
               AVFilterContext* buffersrc_ctx = nullptr;   // The buffer that will store all the frames in one place for the palette generation.
               AVFilterContext* buffersink_ctx = nullptr;  // The buffer that will store the result afterward (once the palette are used).

               int64_t m_pts_increment = 0;
           };
       };
    };

    ctor

    px::GIF::FFMPEG::FFMPEG(const px::Point2D<int>& dim,
                           const int framerate,
                           const std::string& filename,
                           const std::string& filter_cmd) :
       m_width(dim.x()),
       m_height(dim.y()),
       m_framerate(framerate),
       m_filename(filename),
       m_filter_desc(filter_cmd)
    {
    #if !_DEBUG
       av_log_set_level(AV_LOG_QUIET); // Set the FFMPEG log to quiet to avoid too much logs.
    #endif

       // Allocate the temporary buffer that holds the ffmpeg image (pxImage to AVFrame conversion).
       picture_rgb24 = av_frame_alloc();
       picture_rgb24->pts = 0;
       picture_rgb24->data[0] = NULL;
       picture_rgb24->linesize[0] = -1;
       picture_rgb24->format = AV_PIX_FMT_RGB24;
       picture_rgb24->height = m_height;
       picture_rgb24->width = m_width;

       if ((_ret = av_image_alloc(picture_rgb24->data, picture_rgb24->linesize, m_width, m_height, (AVPixelFormat)picture_rgb24->format, 24)) < 0)
           throw px::GIF::Error("Failed to allocate the AVFrame for pxImage conversion with error: " +
                                std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)),
                                "GIF::FFMPEG CTOR");  

       //printf("allocated picture of size %d, linesize %d %d %d %d\n", _ret, picture_rgb24->linesize[0], picture_rgb24->linesize[1], picture_rgb24->linesize[2], picture_rgb24->linesize[3]);

       init_muxer();   // Prepare the GIF encoder (open it on disk).
       init_filters(); // Prepare the filter that will be applied over the frame.

       // Instead of hardcoding {1,100}, which is the GIF time base, we collect it from the stream.
       // This avoids future problems if the codec changes in ffmpeg.
       if (ofmt_ctx && ofmt_ctx->nb_streams > 0)
           m_pts_increment = av_rescale_q(1, { 1, m_framerate }, ofmt_ctx->streams[0]->time_base);
       else
           m_pts_increment = av_rescale_q(1, { 1, m_framerate }, { 1, 100 });
    }
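
    As a quick sanity check on the time-base arithmetic above: av_rescale_q(a, bq, cq) rescales a from time base bq to cq with rounding, so at 12 fps against the GIF's usual 1/100 time base each frame should advance the PTS by round(100/12) = 8 ticks. A minimal stand-in for the formula (not FFmpeg's implementation, just the same arithmetic with nearest rounding):

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Minimal stand-in for av_rescale_q: a * (bq_num/bq_den) / (cq_num/cq_den),
    // rounded to the nearest integer (FFmpeg's default is near-rounding).
    static int64_t rescale_q(int64_t a, int64_t bq_num, int64_t bq_den,
                             int64_t cq_num, int64_t cq_den)
    {
        const int64_t num = a * bq_num * cq_den;
        const int64_t den = bq_den * cq_num;
        return (num + den / 2) / den; // round to nearest
    }

    int main()
    {
        // One frame at 12 fps, expressed in a 1/100 time base:
        // 1 * (1/12) / (1/100) = 100/12 = 8.33... -> 8 ticks per frame.
        assert(rescale_q(1, 1, 12, 1, 100) == 8);
        return 0;
    }
    ```

    Because 100/12 is not exact, successive frames drift slightly against wall-clock time; reading the real stream time base, as the constructor does, at least keeps the increment consistent with whatever the muxer actually uses.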

    FFMPEG Initialization (Filter and muxer)

    void px::GIF::FFMPEG::init_filters()
    {
       const AVFilter* buffersrc = avfilter_get_by_name("buffer");
       const AVFilter* buffersink = avfilter_get_by_name("buffersink");

       AVRational time_base = { 1, m_framerate };
       AVRational aspect_pixel = { 1, 1 };

       AVFilterInOut* inputs = avfilter_inout_alloc();
       AVFilterInOut* outputs = avfilter_inout_alloc();

       filter_graph = avfilter_graph_alloc();

       try
       {
           if (!outputs || !inputs || !filter_graph)
               throw px::GIF::Error("Failed in 'init_filters': could not allocate the graph/filters.", "GIF::FFMPEG init_filters");

           char args[512];
           snprintf(args, sizeof(args),
                    "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                    m_width, m_height,
                    picture_rgb24->format,
                    time_base.num, time_base.den,
                    aspect_pixel.num, aspect_pixel.den);

           if (avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, nullptr, filter_graph) < 0)
               throw px::GIF::Error("Failed to create the 'source buffer' in the init_filters method.", "GIF::FFMPEG init_filters");


           if (avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", nullptr, nullptr, filter_graph) < 0)
               throw px::GIF::Error("Failed to create the 'sink buffer' in the init_filters method.", "GIF::FFMPEG init_filters");

           // GIF has possible output of PAL8.
           enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_PAL8, AV_PIX_FMT_NONE };

           if (av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN) < 0)
               throw px::GIF::Error("Failed to set the output pixel format.", "GIF::FFMPEG init_filters");

           outputs->name = av_strdup("in");
           outputs->filter_ctx = buffersrc_ctx;
           outputs->pad_idx = 0;
           outputs->next = nullptr;

           inputs->name = av_strdup("out");
           inputs->filter_ctx = buffersink_ctx;
           inputs->pad_idx = 0;
           inputs->next = nullptr;

           if (avfilter_graph_parse_ptr(filter_graph, m_filter_desc.c_str(), &inputs, &outputs, nullptr) < 0)
               throw px::GIF::Error("Failed to parse the filter graph (bad string!).", "GIF::FFMPEG init_filters");

           if (avfilter_graph_config(filter_graph, nullptr) < 0)
               throw px::GIF::Error("Failed to configure the filter graph (bad string!).", "GIF::FFMPEG init_filters");

           avfilter_inout_free(&inputs);
           avfilter_inout_free(&outputs);
       }
       catch (const std::exception& e)
       {
           // Catch the exception to free the in/out elements, then re-throw.
           avfilter_inout_free(&inputs);
           avfilter_inout_free(&outputs);
           throw; // re-throw without slicing the exception type
       }
    }


    void px::GIF::FFMPEG::init_muxer()
    {
       AVOutputFormat* o_fmt = av_guess_format("gif", m_filename.c_str(), "video/gif");

       if ((_ret = avformat_alloc_output_context2(&ofmt_ctx, o_fmt, "gif", m_filename.c_str())) < 0)
           throw px::GIF::Error(std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)) + " allocate output format.", "GIF::FFMPEG init_muxer");

       AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_GIF);
       if (!codec) throw px::GIF::Error("Could not find the 'GIF' codec.", "GIF::FFMPEG init_muxer");

    #if 0
       const AVPixelFormat* p = codec->pix_fmts;
       while (p != NULL && *p != AV_PIX_FMT_NONE) {
           printf("supported pix fmt: %s\n", av_get_pix_fmt_name(*p));
           ++p;
       }
    #endif

       AVStream* stream = avformat_new_stream(ofmt_ctx, codec);

       AVCodecParameters* codec_paramters = stream->codecpar;
       codec_paramters->codec_tag = 0;
       codec_paramters->codec_id = codec->id;
       codec_paramters->codec_type = AVMEDIA_TYPE_VIDEO;
       codec_paramters->width = m_width;
       codec_paramters->height = m_height;
       codec_paramters->format = AV_PIX_FMT_PAL8;

       o_codec_ctx = avcodec_alloc_context3(codec);
       avcodec_parameters_to_context(o_codec_ctx, codec_paramters);

       o_codec_ctx->time_base = { 1, m_framerate };

       if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
           o_codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

       if ((_ret = avcodec_open2(o_codec_ctx, codec, NULL)) < 0)
           throw px::GIF::Error(std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)) + " open output codec.", "GIF::FFMPEG init_muxer");

       if ((_ret = avio_open(&ofmt_ctx->pb, m_filename.c_str(), AVIO_FLAG_WRITE)) < 0)
           throw px::GIF::Error(std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)) + " avio open error.", "GIF::FFMPEG init_muxer");

       if ((_ret = avformat_write_header(ofmt_ctx, NULL)) < 0)
           throw px::GIF::Error(std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)) + " write GIF header", "GIF::FFMPEG init_muxer");

    #if _DEBUG
       // This prints the stream/output format.
       av_dump_format(ofmt_ctx, -1, m_filename.c_str(), 1);
    #endif
    }

    Add frame (usually in a loop)

    void px::GIF::FFMPEG::add_frame(pxImage * const img)
    {
       if (img->getImageType() != PXT_BYTE || img->getNChannels() != 4)
           throw px::GIF::Error("Failed to 'add_frame' since the image is not PXT_BYTE with 4 channels.", "GIF::FFMPEG add_frame");

       if (img->getWidth() != m_width || img->getHeight() != m_height)
           throw px::GIF::Error("Failed to 'add_frame' since the size does not match the other inputs.", "GIF::FFMPEG add_frame");

       const int pitch = picture_rgb24->linesize[0];
       auto px_ptr = getImageAccessor(img);

       for (int y = 0; y < m_height; y++)
       {
           const int px_row = img->getOrigin() == ORIGIN_BOT_LEFT ? m_height - y - 1 : y;
           for (int x = 0; x < m_width; x++)
           {
               const int idx = y * pitch + 3 * x;
               picture_rgb24->data[0][idx] = px_ptr[px_row][x].ch[PX_RE];
               picture_rgb24->data[0][idx + 1] = px_ptr[px_row][x].ch[PX_GR];
               picture_rgb24->data[0][idx + 2] = px_ptr[px_row][x].ch[PX_BL];
           }
       }

       // palettegen needs the whole stream, so just add the frame to the buffer.
       if ((_ret = av_buffersrc_add_frame_flags(buffersrc_ctx, picture_rgb24, AV_BUFFERSRC_FLAG_KEEP_REF)) < 0)
           throw px::GIF::Error("Failed to 'add_frame' to global buffer with error: " +
                                std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)),
                                "GIF::FFMPEG add_frame");

       // Increment the PTS of the picture for the next addition to the buffer.
       picture_rgb24->pts += m_pts_increment;

       as_frame = true;
    }    

    Encoder (final step)

    void px::GIF::FFMPEG::encode()
    {
       if (!as_frame)
           throw px::GIF::Error("Please call 'add_frame' before running encode().", "GIF::FFMPEG encode");

       // end of buffer
       if ((_ret = av_buffersrc_add_frame_flags(buffersrc_ctx, nullptr, AV_BUFFERSRC_FLAG_KEEP_REF)) < 0)
           throw px::GIF::Error("error add frame to buffer source: " + std::string(av_make_error_string(_err_msg, FFMPEG_MSG_LEN, _ret)), "GIF::FFMPEG encode");

       do {
           AVFrame* filter_frame = av_frame_alloc();
           _ret = av_buffersink_get_frame(buffersink_ctx, filter_frame);
           if (_ret == AVERROR(EAGAIN) || _ret == AVERROR_EOF) {
               av_frame_free(&filter_frame); // free (not just unref) so the frame itself is not leaked
               break;
           }

           // write the filtered frame to the output file
           muxing_one_frame(filter_frame);

           av_frame_free(&filter_frame);
       } while (_ret >= 0);

       av_write_trailer(ofmt_ctx);
    }

    void px::GIF::FFMPEG::muxing_one_frame(AVFrame* frame)
    {
       int ret = avcodec_send_frame(o_codec_ctx, frame);
       AVPacket *pkt = av_packet_alloc(); // av_packet_alloc() already initializes the packet

       while (ret >= 0) {
           ret = avcodec_receive_packet(o_codec_ctx, pkt);
           if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
               break;
           }

           av_write_frame(ofmt_ctx, pkt);
           av_packet_unref(pkt); // release the packet data after each write
       }
       av_packet_free(&pkt);
    }

    DTOR, Release and Cancel

    px::GIF::FFMPEG::~FFMPEG()
    {
       release();
    }


    void px::GIF::FFMPEG::release()
    {
       // Muxer stuffs
       if (ofmt_ctx != nullptr) avformat_free_context(ofmt_ctx);
       if (o_codec_ctx != nullptr) avcodec_close(o_codec_ctx);
       if (o_codec_ctx != nullptr) avcodec_free_context(&o_codec_ctx);

       ofmt_ctx = nullptr;
       o_codec_ctx = nullptr;

       // Filter stuffs
       if (buffersrc_ctx != nullptr) avfilter_free(buffersrc_ctx);
       if (buffersink_ctx != nullptr) avfilter_free(buffersink_ctx);
       if (filter_graph != nullptr) avfilter_graph_free(&filter_graph);

       buffersrc_ctx = nullptr;
       buffersink_ctx = nullptr;
       filter_graph = nullptr;

       // Conversion image.
       if (picture_rgb24 != nullptr) av_frame_free(&picture_rgb24);
       picture_rgb24 = nullptr;
    }

    void px::GIF::FFMPEG::cancel()
    {
       // In case of failure we must close ffmpeg and exit.
       av_write_trailer(ofmt_ctx);

       // Release and close all elements.
       release();

       // Delete the file on disk.
       if (remove(m_filename.c_str()) != 0)
           PX_LOG0(PX_LOGLEVEL_ERROR, "GIF::FFMPEG - On 'cancel' failed to remove the file.");
    }
  • Matomo Analytics for WordPress

    15 October 2019, by Matomo Core Team — Community

    Self-hosting web analytics got a whole lot easier! Introducing Matomo for WordPress.

    Be the first to try it out! Your feedback is much needed and appreciated.

    Get a fully functioning Matomo (which is comparable to Google Analytics) in seconds! How? With the new Matomo Analytics for WordPress plugin.

    Web analytics in WordPress has never been easier to get, or more powerful. Matomo Analytics for WordPress is the one-stop problem solver. It’ll save you time, money and give you the insights to help your website or business succeed.

    Best of all, we get to further the goal of decentralising the internet. Our hope is for Matomo Analytics for WordPress to spread far and wide. We’re so excited that more and more people can now get their hands on this powerful, free, open-source analytics platform, in a few clicks!

    Download now and check it out!

    What do you get?

    • No more signing up to third party analytics service (like Google)
    • No more sending away your valuable data to a third party service (like Google)
    • Easy setup – install with a few clicks, no tracking code installation or developer knowledge needed
    • 100% accurate data – no data sampling and no data limits 
    • Full data ownership – all data is stored on your servers and no one else can see your data
    • Privacy protection / GDPR compliance
    • Ecommerce tracking out-of-the-box (Woocommerce, Easy Digital Downloads, and MemberPress) and we’re keen to add many more over time
    • Powerful features – segmenting, comparing reports, different visualisations, real-time reports, visit logs and visitor profiles, Matomo Tag Manager, dashboards, data export, APIs, and many more
    • Compared to other WordPress solutions we don’t charge you extra for basic features that should work out-of-the-box
    • Just like Matomo On-Premise, Matomo Analytics for WordPress is free

    We need your feedback!

    We all know and love the versatility of WordPress – with over 55,000 plugins and all the different ways of hosting it. However, with this great versatility comes the potential for things to be missed, so we’re keen to hear your feedback.

    Thank you! We really appreciate your help on this ❤️

    How do you get Matomo Analytics for WordPress?

    Log in to your WordPress and go to “Plugins => Add New”, search for “Matomo Analytics – Ethical Stats. Powerful Insights”, click “Install” and then “Activate”.

    All you need is at least WordPress 4.8 and PHP 7.0 or later. MySQL 5.1+ is recommended. 
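
    If you manage your site from the command line, the same install can likely be done with WP-CLI; the plugin slug `matomo` is an assumption here, so check the plugin page if it differs:

    ```shell
    # Hypothetical WP-CLI equivalent of the dashboard steps above
    # (assumes WP-CLI is installed and you are in the WordPress root).
    wp plugin install matomo --activate

    # Check the requirements mentioned above.
    wp core version   # WordPress 4.8 or later
    php --version     # PHP 7.0 or later
    ```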

    The source code is available at: https://github.com/matomo-org/wp-matomo/

    In perfect harmony : Matomo and WordPress

    Matomo Analytics for WordPress

    The idea for this started two years ago when we realised the similarities between the Matomo and WordPress project. 

    Not only from a technological point of view – where both are based on PHP and MySQL and can be extended using plugins – but also from a philosophical, license and values point of view. We both believe in privacy, security, data ownership, openness, transparency, having things working out-of-the-box, simplicity etc. 

    WordPress is currently used on approximately 30% of all websites. Many of them use the self-hosted open-source WordPress version. Giving everyone in this market the opportunity to easily get a powerful web analytics platform for free, means a lot to us. We believe WordPress users get a real choice besides the standard solution of Google Analytics, and it furthers our effort and goal of decentralising the internet. 

    We’re hoping more people will be empowered to protect user privacy, have access to a great free and open-source tool, and keep control of data in their own hands.

    We hope you feel the same. Help us spread the word to your friends and get them in on this awesome new project !


    FAQs

    Isn’t there already a WP-Matomo plugin for WordPress available?

    Yes, the existing WP-Matomo (WP-Piwik) plugin is an awesome plugin to connect your existing Matomo On-Premise or Matomo Cloud account with WordPress. The difference is that this new plugin installs Matomo Analytics fully in your WordPress. So you get the convenience of having a powerful analytics platform within your WordPress.

    We highly recommend you install this new plugin if you use WordPress and are not running Matomo yet. 

    If you are already using Matomo on our Cloud or On-Premise, we’d still highly recommend you use WP-Matomo (WP-Piwik), so that you get an easier way of inserting the tracking code into your WordPress site and get insights faster.

    I have a high-traffic website, will it be an issue?

    If you have a lot of traffic, we’d advise you to install Matomo On-Premise separately. There’s no specific traffic threshold we can give you on when it’s better to use Matomo On-Premise. It really depends on your server. 

    We reckon if you have more than 500,000 page views a month, you may want to think about using Matomo On-Premise with WP-Matomo instead, but this is just an estimate. In general, if the load on your server is already quite high, then it might be better to install Matomo on a separate server. See also recommended server sizing for running Matomo.

    How do I report a bug or request a new feature in Matomo for WordPress?

    Please create an issue on our repository whenever you find a bug or have any suggestions or ideas for improvement. We want to build an outstanding analytics experience for WordPress!

    Have another question you’re dying to ask? The Matomo for WordPress FAQ page might have the answer you need.

    Matomo Analytics for WordPress newsletter

    Get ahead of the crowd – sign up to our exclusive Matomo for WordPress newsletter to get the latest updates on this exciting new project.

    &lt;script type=&quot;text/javascript&quot;&gt;<br />
    (function(global) {<br />
     function serialize(form){if(!form||form.nodeName!==&quot;FORM&quot;){return }var i,j,q=[];for(i=form.elements.length-1;i&gt;=0;i=i-1){if(form.elements[i].name===&quot;&quot;){continue}switch(form.elements[i].nodeName){case&quot;INPUT&quot;:switch(form.elements[i].type){case&quot;text&quot;:case&quot;hidden&quot;:case&quot;password&quot;:case&quot;button&quot;:case&quot;reset&quot;:case&quot;submit&quot;:q.push(form.elements[i].name+&quot;=&quot;+encodeURIComponent(form.elements[i].value));break;case&quot;checkbox&quot;:case&quot;radio&quot;:if(form.elements[i].checked){q.push(form.elements[i].name+&quot;=&quot;+encodeURIComponent(form.elements[i].value))}break;case&quot;file&quot;:break}break;case&quot;TEXTAREA&quot;:q.push(form.elements[i].name+&quot;=&quot;+encodeURIComponent(form.elements[i].value));break;case&quot;SELECT&quot;:switch(form.elements[i].type){case&quot;select-one&quot;:q.push(form.elements[i].name+&quot;=&quot;+encodeURIComponent(form.elements[i].value));break;case&quot;select-multiple&quot;:for(j=form.elements[i].options.length-1;j&gt;=0;j=j-1){if(form.elements[i].options[j].selected){q.push(form.elements[i].name+&quot;=&quot;+encodeURIComponent(form.elements[i].options[j].value))}}break}break;case&quot;BUTTON&quot;:switch(form.elements[i].type){case&quot;reset&quot;:case&quot;submit&quot;:case&quot;button&quot;:q.push(form.elements[i].name+&quot;=&quot;+encodeURIComponent(form.elements[i].value));break}break}}return q.join(&quot;&amp;&quot;)};<br />
    <br />
    <br />
     function extend(destination, source) {<br />
       for (var prop in source) {<br />
         destination[prop] = source[prop];<br />
       }<br />
     }<br />
    <br />
     if (!Mimi) var Mimi = {};<br />
     if (!Mimi.Signups) Mimi.Signups = {};<br />
    <br />
     Mimi.Signups.EmbedValidation = function() {<br />
       this.initialize();<br />
    <br />
       var _this = this;<br />
       if (document.addEventListener) {<br />
         this.form.addEventListener('submit', function(e){<br />
           _this.onFormSubmit(e);<br />
         });<br />
       } else {<br />
         this.form.attachEvent('onsubmit', function(e){<br />
           _this.onFormSubmit(e);<br />
         });<br />
       }<br />
     };<br />
    <br />
     extend(Mimi.Signups.EmbedValidation.prototype, {<br />
       initialize: function() {<br />
         this.form         = document.getElementById('ema_signup_form');<br />
         this.submit       = document.getElementById('webform_submit_button');<br />
         this.callbackName = 'jsonp_callback_' + Math.round(100000 * Math.random());<br />
         this.validEmail   = /.+@.+\..+/<br />
       },<br />
    <br />
       onFormSubmit: function(e) {<br />
         e.preventDefault();<br />
    <br />
         this.validate();<br />
         if (this.isValid) {<br />
           this.submitForm();<br />
         } else {<br />
           this.revalidateOnChange();<br />
         }<br />
       },<br />
    <br />
       validate: function() {<br />
         this.isValid = true;<br />
         this.emailValidation();<br />
         this.fieldAndListValidation();<br />
         this.updateFormAfterValidation();<br />
       },<br />
    <br />
       emailValidation: function() {<br />
         var email = document.getElementById('signup_email');<br />
    <br />
         if (this.validEmail.test(email.value)) {<br />
           this.removeTextFieldError(email);<br />
         } else {<br />
           this.textFieldError(email);<br />
           this.isValid = false;<br />
         }<br />
       },<br />
    <br />
       fieldAndListValidation: function() {<br />
         var fields = this.form.querySelectorAll('.mimi_field.required');<br />
    <br />
         for (var i = 0; i &lt; fields.length; ++i) {<br />
           var field = fields[i],<br />
               type  = this.fieldType(field);<br />
           if (type === 'checkboxes' || type === 'radio_buttons' || type === 'age_check') {<br />
             this.checkboxAndRadioValidation(field);<br />
           } else {<br />
             this.textAndDropdownValidation(field, type);<br />
           }<br />
         }<br />
       },<br />
    <br />
       fieldType: function(field) {<br />
         var type = field.querySelectorAll('.field_type');<br />
    <br />
         if (type.length) {<br />
           return type[0].getAttribute('data-field-type');<br />
         } else if (field.className.indexOf('checkgroup') &gt;= 0) {<br />
           return 'checkboxes';<br />
         } else {<br />
           return 'text_field';<br />
         }<br />
       },<br />
    <br />
       checkboxAndRadioValidation: function(field) {<br />
         var inputs   = field.getElementsByTagName('input'),<br />
             selected = false;<br />
    <br />
         for (var i = 0; i &lt; inputs.length; ++i) {<br />
           var input = inputs[i];<br />
           if((input.type === 'checkbox' || input.type === 'radio') &amp;&amp; input.checked) {<br />
             selected = true;<br />
           }<br />
         }<br />
    <br />
         if (selected) {<br />
           field.className = field.className.replace(/ invalid/g, '');<br />
         } else {<br />
           if (field.className.indexOf('invalid') === -1) {<br />
             field.className += ' invalid';<br />
           }<br />
    <br />
           this.isValid = false;<br />
         }<br />
       },<br />
    <br />
       textAndDropdownValidation: function(field, type) {<br />
         var inputs = field.getElementsByTagName('input');<br />
    <br />
         for (var i = 0; i &lt; inputs.length; ++i) {<br />
           var input = inputs[i];<br />
           if (input.name.indexOf('signup') &gt;= 0) {<br />
             if (type === 'text_field') {<br />
               this.textValidation(input);<br />
             } else {<br />
               this.dropdownValidation(field, input);<br />
             }<br />
           }<br />
         }<br />
         this.htmlEmbedDropdownValidation(field);<br />
       },<br />
    <br />
       textValidation: function(input) {<br />
         if (input.id === 'signup_email') return;<br />
    <br />
         if (input.value) {<br />
           this.removeTextFieldError(input);<br />
         } else {<br />
           this.textFieldError(input);<br />
           this.isValid = false;<br />
         }<br />
       },<br />
    <br />
       dropdownValidation: function(field, input) {<br />
         if (input.value) {<br />
           field.className = field.className.replace(/ invalid/g, '');<br />
         } else {<br />
           if (field.className.indexOf('invalid') === -1) field.className += ' invalid';<br />
           this.onSelectCallback(input);<br />
           this.isValid = false;<br />
         }<br />
       },<br />
    <br />
       htmlEmbedDropdownValidation: function(field) {<br />
         var dropdowns = field.querySelectorAll('.mimi_html_dropdown');<br />
         var _this = this;<br />
    <br />
         for (var i = 0; i &lt; dropdowns.length; ++i) {<br />
           var dropdown = dropdowns[i];<br />
    <br />
           if (dropdown.value) {<br />
             field.className = field.className.replace(/ invalid/g, '');<br />
           } else {<br />
             if (field.className.indexOf('invalid') === -1) field.className += ' invalid';<br />
             this.isValid = false;<br />
             dropdown.onchange = (function(){ _this.validate(); });<br />
           }<br />
         }<br />
       },<br />
    <br />
       textFieldError: function(input) {<br />
         input.className   = 'required invalid';<br />
         input.placeholder = input.getAttribute('data-required-field');<br />
       },<br />
    <br />
       removeTextFieldError: function(input) {<br />
         input.className   = 'required';<br />
         input.placeholder = '';<br />
       },<br />
    <br />
       onSelectCallback: function(input) {<br />
         if (typeof Widget === 'undefined' || !Widget.BasicDropdown) return;<br />
    <br />
         var dropdownEl = input.parentNode,<br />
             instances  = Widget.BasicDropdown.instances,<br />
             _this = this;<br />
    <br />
         for (var i = 0; i &lt; instances.length; ++i) {<br />
           var instance = instances[i];<br />
           if (instance.wrapperEl === dropdownEl) {<br />
             instance.onSelect = function(){ _this.validate() };<br />
           }<br />
         }<br />
       },<br />
    <br />
       updateFormAfterValidation: function() {<br />
         this.form.className   = this.setFormClassName();<br />
         this.submit.value     = this.submitButtonText();<br />
         this.submit.disabled  = !this.isValid;<br />
         this.submit.className = this.isValid ? 'submit' : 'disabled';<br />
       },<br />
    <br />
       setFormClassName: function() {<br />
         var name = this.form.className;<br />
    <br />
         if (this.isValid) {<br />
           return name.replace(/\s?mimi_invalid/, '');<br />
         } else {<br />
           if (name.indexOf('mimi_invalid') === -1) {<br />
             return name += ' mimi_invalid';<br />
           } else {<br />
             return name;<br />
           }<br />
         }<br />
       },<br />
    <br />
       submitButtonText: function() {<br />
         var invalidFields = document.querySelectorAll('.invalid'),<br />
             text;<br />
    <br />
         if (this.isValid || !invalidFields.length) {
           text = this.submit.getAttribute('data-default-text');<br />
         } else {<br />
           if (invalidFields.length || invalidFields[0].className.indexOf('checkgroup') === -1) {<br />
             text = this.submit.getAttribute('data-invalid-text');<br />
           } else {<br />
             text = this.submit.getAttribute('data-choose-list');<br />
           }<br />
         }<br />
         return text;<br />
       },<br />
    <br />
       submitForm: function() {<br />
         this.formSubmitting();<br />
    <br />
         var _this = this;<br />
         window[this.callbackName] = function(response) {<br />
           delete window[_this.callbackName];
           document.body.removeChild(script);<br />
           _this.onSubmitCallback(response);<br />
         };<br />
    <br />
         var script = document.createElement('script');<br />
         script.src = this.formUrl('json');<br />
         document.body.appendChild(script);<br />
       },<br />
    <br />
       formUrl: function(format) {<br />
         var action  = this.form.action;<br />
         if (format === 'json') action += '.json';<br />
         return action + '?callback=' + this.callbackName + '&' + serialize(this.form);
       },<br />
    <br />
       formSubmitting: function() {<br />
         this.form.className  += ' mimi_submitting';<br />
         this.submit.value     = this.submit.getAttribute('data-submitting-text');<br />
         this.submit.disabled  = true;<br />
         this.submit.className = 'disabled';<br />
       },<br />
    <br />
       onSubmitCallback: function(response) {<br />
         if (response.success) {<br />
           this.onSubmitSuccess(response.result);<br />
         } else {<br />
           top.location.href = this.formUrl('html');<br />
         }<br />
       },<br />
    <br />
       onSubmitSuccess: function(result) {<br />
         if (result.has_redirect) {<br />
           top.location.href = result.redirect;<br />
         } else if(result.single_opt_in || !result.confirmation_html) {<br />
           this.disableForm();<br />
           this.updateSubmitButtonText(this.submit.getAttribute('data-thanks'));<br />
         } else {<br />
           this.showConfirmationText(result.confirmation_html);<br />
         }<br />
       },<br />
    <br />
       showConfirmationText: function(html) {<br />
         var fields = this.form.querySelectorAll('.mimi_field');<br />
    <br />
         for (var i = 0; i < fields.length; ++i) {
           fields[i].style['display'] = 'none';<br />
         }<br />
    <br />
         (this.form.querySelectorAll('fieldset')[0] || this.form).innerHTML = html;<br />
       },<br />
    <br />
       disableForm: function() {<br />
         var elements = this.form.elements;<br />
         for (var i = 0; i < elements.length; ++i) {
           elements[i].disabled = true;<br />
         }<br />
       },<br />
    <br />
       updateSubmitButtonText: function(text) {<br />
         this.submit.value = text;<br />
       },<br />
    <br />
       revalidateOnChange: function() {<br />
         var fields = this.form.querySelectorAll(".mimi_field.required"),
             _this = this;<br />
    <br />
         var onTextFieldChange = function() {<br />
           if (this.getAttribute('name') === 'signup[email]') {<br />
             if (_this.validEmail.test(this.value)) _this.validate();<br />
           } else {<br />
             if (this.value.length === 1) _this.validate();<br />
           }<br />
         }<br />
    <br />
         for (var i = 0; i < fields.length; ++i) {
           var inputs = fields[i].getElementsByTagName('input');<br />
           for (var j = 0; j < inputs.length; ++j) {
             if (this.fieldType(fields[i]) === 'text_field') {<br />
               inputs[j].onkeyup = onTextFieldChange;<br />
               inputs[j].onchange = onTextFieldChange; <br />
             } else {<br />
               inputs[j].onchange = function(){ _this.validate() };<br />
             }<br />
           }<br />
         }<br />
       }<br />
     });<br />
    <br />
     if (document.addEventListener) {<br />
       document.addEventListener("DOMContentLoaded", function() {
         new Mimi.Signups.EmbedValidation();<br />
       });<br />
     }<br />
     else {<br />
       window.attachEvent('onload', function() {<br />
         new Mimi.Signups.EmbedValidation();<br />
       });<br />
     }<br />
    })(this);<br />
     </script>
  • How to Read DJI H264 FPV Feed as OpenCV Mat Object ?

     29 May 2019, by Walter Morawa

     TL;DR: All DJI developers would benefit from being able to decode raw H264 video-stream byte arrays into a format compatible with OpenCV.

     I’ve spent a lot of time looking for a way to read DJI’s FPV feed as an OpenCV Mat object. I am probably overlooking something fundamental, since I am not too familiar with image encoding/decoding.

     Future developers who come across this question will likely run into a bunch of the same issues I had. It would be great if DJI developers could use OpenCV directly without needing a third-party library.

     I’m willing to use FFMPEG or JavaCV if necessary, but that’s quite a hurdle for most Android developers, since it means working with C++, the NDK, and the terminal for testing. That seems like overkill, and both options are quite time consuming. This JavaCV H264 conversion seems unnecessarily complex. I found it from this relevant question.

    I believe the issue lies in the fact that we need to decode both the byte array of length 6 (info array) and the byte array with current frame info simultaneously.

    Basically, DJI’s FPV feed comes in a number of formats.

    1. Raw H264 (MPEG4) in VideoFeeder.VideoDataListener
       // The callback for receiving the raw H264 video data for camera live view
       mReceivedVideoDataListener = new VideoFeeder.VideoDataListener() {
           @Override
           public void onReceive(byte[] videoBuffer, int size) {
               //Log.d("BytesReceived", Integer.toString(videoStreamFrameNumber));
               if (videoStreamFrameNumber++%30 == 0){
                   //convert video buffer to opencv array
                   OpenCvAndModelAsync openCvAndModelAsync = new OpenCvAndModelAsync();
                   openCvAndModelAsync.execute(videoBuffer);
               }
               if (mCodecManager != null) {
                   mCodecManager.sendDataToDecoder(videoBuffer, size);
               }
           }
       };
     2. DJI also has its own Android decoder sample that uses FFMPEG to convert to YUV format.
       @Override
       public void onYuvDataReceived(final ByteBuffer yuvFrame, int dataSize, final int width, final int height) {
           //In this demo, we test the YUV data by saving it into JPG files.
           //DJILog.d(TAG, "onYuvDataReceived " + dataSize);
        if (count++ % 30 == 0 && yuvFrame != null) {
               final byte[] bytes = new byte[dataSize];
               yuvFrame.get(bytes);
               AsyncTask.execute(new Runnable() {
                   @Override
                   public void run() {
                       if (bytes.length >= width * height) {
                           Log.d("MatWidth", "Made it");
                           YuvImage yuvImage = saveYuvDataToJPEG(bytes, width, height);
                           Bitmap rgbYuvConvert = convertYuvImageToRgb(yuvImage, width, height);

                           Mat yuvMat = new Mat(height, width, CvType.CV_8UC1);
                           yuvMat.put(0, 0, bytes);
                           //OpenCv Stuff
                       }
                   }
               });
           }
       }

     Edit: For those who want to see DJI’s YUV-to-JPEG function, here it is from the sample application:

    private YuvImage saveYuvDataToJPEG(byte[] yuvFrame, int width, int height){
           byte[] y = new byte[width * height];
           byte[] u = new byte[width * height / 4];
           byte[] v = new byte[width * height / 4];
           byte[] nu = new byte[width * height / 4]; //
           byte[] nv = new byte[width * height / 4];

           System.arraycopy(yuvFrame, 0, y, 0, y.length);
           Log.d("MatY", y.toString());
        for (int i = 0; i < u.length; i++) {
               v[i] = yuvFrame[y.length + 2 * i];
               u[i] = yuvFrame[y.length + 2 * i + 1];
           }
           int uvWidth = width / 2;
           int uvHeight = height / 2;
        for (int j = 0; j < uvWidth / 2; j++) {
            for (int i = 0; i < uvHeight / 2; i++) {
                   byte uSample1 = u[i * uvWidth + j];
                   byte uSample2 = u[i * uvWidth + j + uvWidth / 2];
                   byte vSample1 = v[(i + uvHeight / 2) * uvWidth + j];
                   byte vSample2 = v[(i + uvHeight / 2) * uvWidth + j + uvWidth / 2];
                   nu[2 * (i * uvWidth + j)] = uSample1;
                   nu[2 * (i * uvWidth + j) + 1] = uSample1;
                   nu[2 * (i * uvWidth + j) + uvWidth] = uSample2;
                   nu[2 * (i * uvWidth + j) + 1 + uvWidth] = uSample2;
                   nv[2 * (i * uvWidth + j)] = vSample1;
                   nv[2 * (i * uvWidth + j) + 1] = vSample1;
                   nv[2 * (i * uvWidth + j) + uvWidth] = vSample2;
                   nv[2 * (i * uvWidth + j) + 1 + uvWidth] = vSample2;
               }
           }
           //nv21test
           byte[] bytes = new byte[yuvFrame.length];
           System.arraycopy(y, 0, bytes, 0, y.length);
        for (int i = 0; i < u.length; i++) {
               bytes[y.length + (i * 2)] = nv[i];
               bytes[y.length + (i * 2) + 1] = nu[i];
           }
           Log.d(TAG,
                 "onYuvDataReceived: frame index: "
                     + DJIVideoStreamDecoder.getInstance().frameIndex
                     + ",array length: "
                     + bytes.length);
           YuvImage yuver = screenShot(bytes,Environment.getExternalStorageDirectory() + "/DJI_ScreenShot", width, height);
           return yuver;
       }

       /**
        * Save the buffered data into a JPG image file
        */
       private YuvImage screenShot(byte[] buf, String shotDir, int width, int height) {
           File dir = new File(shotDir);
           if (!dir.exists() || !dir.isDirectory()) {
               dir.mkdirs();
           }
           YuvImage yuvImage = new YuvImage(buf,
                   ImageFormat.NV21,
                   width,
                   height,
                   null);

           OutputStream outputFile = null;

           final String path = dir + "/ScreenShot_" + System.currentTimeMillis() + ".jpg";

           try {
               outputFile = new FileOutputStream(new File(path));
           } catch (FileNotFoundException e) {
               Log.e(TAG, "test screenShot: new bitmap output file error: " + e);
               //return;
           }
           if (outputFile != null) {
               yuvImage.compressToJpeg(new Rect(0,
                       0,
                       width,
                       height), 100, outputFile);
           }
           try {
               outputFile.close();
           } catch (IOException e) {
               Log.e(TAG, "test screenShot: compress yuv image error: " + e);
               e.printStackTrace();
           }

           runOnUiThread(new Runnable() {
               @Override
               public void run() {
                   displayPath(path);
               }
           });
           return yuvImage;
       }
     3. DJI also appears to have a "getRgbaData" function, but there is literally not a single example online or by DJI. Go ahead and Google "DJI getRgbaData"... There’s only the reference to the API documentation that explains the self-explanatory parameters and return values, but nothing else. I couldn’t figure out where to call this, and there doesn’t appear to be a callback function as there is with YUV. You can’t call it from the H264 byte array directly, but perhaps you can get it from the YUV data.

     Option 1 is much preferable to option 2, since the YUV format has quality issues. Option 3 would also likely involve a decoder.

    Here’s a screenshot that DJI’s own YUV conversion produces. WalletPhoneYuv

     I’ve looked at a bunch of suggestions about how to improve the YUV output, remove the green and yellow tints and whatnot, but at this point if DJI can’t do it right, I don’t want to invest resources there.
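For reference, a green/yellow (or blue/purple) cast like that is the classic symptom of swapped U and V chroma ordering, i.e. treating NV21 data as NV12 or vice versa. Whether that is what happens in DJI’s sample is an assumption on my part, but the effect is easy to demonstrate with the plain BT.601 YUV-to-RGB formulas (the class and method names below are made up for illustration):

```java
public class Yuv2Rgb {
    // BT.601 full-range YUV -> RGB for a single sample (all values 0-255).
    // Swapping the u and v arguments models an NV21/NV12 mix-up.
    static int[] yuvToRgb(int y, int u, int v) {
        int r = clamp((int) Math.round(y + 1.402 * (v - 128)));
        int g = clamp((int) Math.round(y - 0.344 * (u - 128) - 0.714 * (v - 128)));
        int b = clamp((int) Math.round(y + 1.772 * (u - 128)));
        return new int[] { r, g, b };
    }

    static int clamp(int x) { return Math.max(0, Math.min(255, x)); }

    public static void main(String[] args) {
        int[] correct = yuvToRgb(81, 90, 240);  // decodes to a strong red
        int[] swapped = yuvToRgb(81, 240, 90);  // same sample with U/V exchanged
        System.out.println("correct: " + correct[0] + "," + correct[1] + "," + correct[2]);
        System.out.println("swapped: " + swapped[0] + "," + swapped[1] + "," + swapped[2]);
    }
}
```

The same sample that decodes to red with the correct plane order comes out blue-green with the planes exchanged, which is exactly the kind of hue shift visible in the screenshot, so double-checking the interleave order in saveYuvDataToJPEG would be a cheap first step.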

    Regarding Option 1, I know there’s FFMPEG and JavaCV that seem like good options if I have to go the video decoding route.

     Moreover, from what I understand, OpenCV can’t handle reading and writing video files without FFMPEG, but I’m not trying to read a video file; I am trying to read an H264/MPEG4 byte[] array. The following code seems to get positive results.

       /* Async OpenCV Code */
    private class OpenCvAndModelAsync extends AsyncTask<byte[], Void, double[]> {
           @Override
           protected double[] doInBackground(byte[]... params) {//Background Code Executing. Don't touch any UI components
               //get fpv feed and convert bytes to mat array
               Mat videoBufMat = new Mat(4, params[0].length, CvType.CV_8UC4);
               videoBufMat.put(0,0, params[0]);
               //if I add this in it says the bytes are empty.
               //Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_ANYCOLOR);
               //encodeVideoBuf.release();
               Log.d("MatRgba", videoBufMat.toString());
            for (int i = 0; i < videoBufMat.rows(); i++){
                for (int j = 0; j < videoBufMat.cols(); j++){
                       double[] rgb = videoBufMat.get(i, j);
                       Log.i("Matrix", "red: "+rgb[0]+" green: "+rgb[1]+" blue: "+rgb[2]+" alpha: "
                               + rgb[3] + " Length: " + rgb.length + " Rows: "
                               + videoBufMat.rows() + " Columns: " + videoBufMat.cols());
                   }
               }
               double[] center = openCVThingy(videoBufMat);
               return center;
           }
           protected void onPostExecute(double[] center) {
               //handle ui or another async task if necessary
           }
       }

     Rows = 4, Columns > 30k. I get lots of RGB values that seem valid, such as red = 113, green = 75, blue = 90, alpha = 220 as a made-up example; however, I also get a ton of 0,0,0,0 values. That should be somewhat okay, since black is 0,0,0 (although I would have thought the alpha would be higher) and I have a black object in my image. I also don’t seem to get any white values (255, 255, 255), even though there is plenty of white area. I’m not logging the entire buffer, so they could be there, but I have yet to see them.

     However, when I try to compute the contours from this image, I almost always get that the moments (center x, y) are exactly in the center of the image. This error has nothing to do with my color filter or contours algorithm: I wrote a script in Python and verified that I implemented it correctly in Android by reading a still image and getting the exact same number of contours, positions, etc. in both Python and Android.

    I noticed it has something to do with the videoBuffer byte size (bonus points if you can explain why every other length is 6)

    2019-05-23 21:14:29.601 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2425
    2019-05-23 21:14:29.802 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2659
    2019-05-23 21:14:30.004 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:30.263 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6015
    2019-05-23 21:14:30.507 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:30.766 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4682
    2019-05-23 21:14:31.005 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:31.234 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 2840
    2019-05-23 21:14:31.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4482
    2019-05-23 21:14:31.664 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:31.927 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4768
    2019-05-23 21:14:32.174 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:32.433 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4700
    2019-05-23 21:14:32.668 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:32.864 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4740
    2019-05-23 21:14:33.102 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 6
    2019-05-23 21:14:33.365 21431-22086/com.dji.simulatorDemo D/VideoBufferSize: 4640

     My questions:

     I. Is this the correct way to read an H264 byte[] as a Mat?
     Assuming the format is RGBA, that means rows = 4 and columns = byte[].length, with CvType.CV_8UC4. Do I have the height and width correct? Something tells me the YUV height and width are off. I was getting some meaningful results, but the contours were exactly in the center, just like with the H264.

     II. Does OpenCV handle MP4 in Android like this? If not, do we need to use FFMPEG or JavaCV?

     III. Does the int size have something to do with it? Why is the int size occasionally 6, and other times 2400 to 6000? I’ve heard about the difference between this frame’s information and information about the next frame, but I’m simply not knowledgeable enough to know how to apply that here.

     I’m starting to think this is where the issue lies. Since I need the 6-byte array of info about the next frame, perhaps my modulo 30 is incorrect. So should I pass the 29th or 31st frame as a format byte for each frame? How is that done in OpenCV, or are we doomed to use the complicated FFMPEG? How would I go about joining the neighboring frames/byte arrays?
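On the recurring length-6 buffers, one plausible explanation, and it is only a guess since DJI doesn’t document it, is that the callback sometimes delivers a single tiny Annex-B NAL unit rather than frame data. Raw H264 is a sequence of NAL units separated by 00 00 00 01 start codes, the low 5 bits of the byte after the start code give the NAL type, and an access-unit delimiter (type 9) is exactly 6 bytes long: 4 start-code bytes, the NAL header, and one payload byte. A small stand-alone scanner (illustrative, not DJI API code) can classify what each incoming buffer actually holds:

```java
import java.util.ArrayList;
import java.util.List;

public class NalScanner {
    // Return the NAL unit types found in an Annex-B H264 buffer.
    // Type 7 = SPS, 8 = PPS, 9 = access-unit delimiter, 5 = IDR slice, 1 = non-IDR slice.
    static List<Integer> nalTypes(byte[] buf) {
        List<Integer> types = new ArrayList<>();
        for (int i = 0; i + 4 < buf.length; i++) {
            boolean start4 = buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 0 && buf[i + 3] == 1;
            boolean start3 = buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1;
            if (start4) {
                types.add(buf[i + 4] & 0x1F);
                i += 4;
            } else if (start3 && i + 3 < buf.length) {
                types.add(buf[i + 3] & 0x1F);
                i += 3;
            }
        }
        return types;
    }

    public static void main(String[] args) {
        // A 6-byte access-unit delimiter: start code + NAL header (type 9) + payload.
        byte[] aud = { 0, 0, 0, 1, 0x09, 0x10 };
        // SPS (type 7) followed by PPS (type 8), as sent ahead of key frames.
        byte[] headers = { 0, 0, 0, 1, 0x67, 0x42, 0, 0, 0, 1, 0x68, (byte) 0xCE };
        System.out.println(nalTypes(aud));      // [9]
        System.out.println(nalTypes(headers));  // [7, 8]
    }
}
```

If the 6-byte buffers turn out to be delimiters or parameter sets, they belong in the decoder along with the frame buffers rather than being processed as frames, which would also explain why the modulo-30 sampling sometimes lands on a non-frame buffer.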

     IV. Can I fix this using Imgcodecs? I was hoping OpenCV would natively handle whether a buffer held color data for this frame or info about the next frame. I added the below code, but I am getting an empty array:

    Mat videoBufMat = Imgcodecs.imdecode(new MatOfByte(params[0]), Imgcodecs.IMREAD_UNCHANGED);

     This is also empty:

    Mat encodeVideoBuf = new Mat(4, params[0].length, CvType.CV_8UC4);
    encodeVideoBuf.put(0,0, params[0]);
    Mat videoBufMat = Imgcodecs.imdecode(encodeVideoBuf, Imgcodecs.IMREAD_UNCHANGED);

     V. Should I try converting the bytes into an Android JPEG and then importing it? Why is DJI’s YUV decoder so complicated looking? It makes me cautious about trying FFMPEG or JavaCV rather than just sticking to the Android decoder or the OpenCV decoder.

     VI. At what stage should I resize the frames to speed up calculations?

     Edit: DJI support got back to me and confirmed they don’t have any samples for doing what I’ve described. This is a time for us, the community, to make this available for everyone!

     Upon further research, I don’t think OpenCV will be able to handle this, as OpenCV’s Android SDK has no functionality for video files/URLs (apart from a homegrown MJPEG codec).

     So is there a way in Android to convert to MJPEG or similar in order to read it? In my application, I only need 1 or 2 frames per second, so perhaps I can save each frame as a JPEG.

     But for real-time applications we will likely need to write our own decoder. Please help so that we can make this available to everyone! This question seems promising: