Advanced search

Media (91)

Other articles (18)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using (including the exact version), as precise an explanation of the problem as possible, the steps taken that resulted in the problem, and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (3564)

  • How to keep personally identifiable information safe

    23 January 2020, by Joselyn Khor

    The protection of personally identifiable information (PII) is important both for individuals, whose privacy may be compromised, and for businesses that may have their reputation ruined or be liable if PII is wrongly accessed, used, or shared.

    Curious about what PII is? Here’s your introduction to personally identifiable information.

    Through hacking, data leaks, or data theft, acquired PII can be combined with other pieces of information to form a more complete picture of you. On an individual level, this puts you at risk of identity theft, credit card fraud, or other harm caused by the fraudulent use of your personal information.

    On a business level, for companies that breach data privacy laws – as in Cambridge Analytica’s harvesting of millions of Facebook profiles – such actions lead to an erosion of trust. They can also impact your financial position, as heavy fines can be imposed for the illegal use and processing of personally identifiable information.

    So what can you do to ensure PII compliance?

    On an individual level:

    1. Don’t give your data away so easily. Although they are long, it’s worthwhile reading through privacy policies so you know what you’re getting yourself into.
    2. Don’t just click ‘agree’ when faced with consent screens; consent screens are deeply flawed, and users mostly opt in without reading and without being properly informed about what they are opting in to.
    3. Did you know you’re most likely being tracked from website to website? For example, Google can identify you across visits and websites. One thing you can do is disable third-party cookies by default. Businesses can also use privacy-friendly analytics tools which halt such tracking.
    4. Use strong passwords.
    5. Be wary of public wifi – hackers can easily access your PII or sensitive data. Use a VPN (virtual private network), which lets you create a secure connection to a server of your choosing and browse the internet safely.
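    Tip 4 deserves a concrete note: in practice a "strong" password is long and random rather than memorable. A minimal sketch using Python's standard library (the function name and the default length of 20 characters are arbitrary illustrative choices, not recommendations from this article):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a password from cryptographically secure random choices."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

    Using `secrets` rather than `random` matters here, because its randomness source is designed for security-sensitive use. Better still, let a password manager generate and store such passwords for you.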

    A PII compliance checklist for businesses/organisations:

    1. Identify where all PII exists and is stored – review it and make sure it is kept in a safe environment.
    2. Identify the laws that apply to you (GDPR, California privacy law, HIPAA) and follow your legal obligations.
    3. Create operational safeguards – policies and procedures for handling PII at an organisation level, and awareness-building focused on the protection of PII.
    4. Encrypt the databases and repositories where such info is kept.
    5. Create privacy-specific safeguards in the way your organisation collects, maintains, uses, and disseminates data, so you protect the confidentiality of the data.
    6. Minimise the use, collection, and retention of PII – only collect and keep PII if it’s necessary for you to perform your legal business function.
    7. Conduct privacy impact assessments (PIAs) to find and prevent privacy risks (identify what is to be collected and why, how the information will be secured, etc.).
    8. De-identify data within the scope of your data collection and analytics tools.
    9. Anonymise data.
    10. Keep your privacy policy updated.
    11. Use pseudonymisation.
    12. A more comprehensive guide for businesses can be found here: https://iapp.org/media/pdf/knowledge_center/NIST_Protecting_PII.pdf
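    To illustrate item 11, pseudonymisation typically replaces a direct identifier with a token that can only be linked back to the person via a separately stored secret. A minimal sketch in Python (the key, field names, and record are hypothetical; this is one possible approach, not a complete compliance measure):

```python
import hmac
import hashlib

# Hypothetical secret key; in practice it must be stored separately
# from the data (e.g. in a key vault) and rotated per your policy.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Map a direct identifier (email, name, ...) to a stable, opaque token."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "plan": "pro"}
# The stored record carries the token instead of the raw identifier.
safe_record = {"user_token": pseudonymise(record["email"]), "plan": record["plan"]}
```

    Because the same identifier always yields the same token, records can still be joined for analytics; note that a plain unkeyed hash would not qualify, since common identifiers can be brute-forced.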
  • Adding ffmpeg OMX codec to Genymotion Android 4.4.2 emulator

    22 April 2016, by photon

    Basic Question:

    Is there a way to add a new audio codec to the Genymotion Android emulator, short of downloading the entire Android source, learning how to build it, and creating my own version of Android?


    Context:

    I have written a Java Android app that acts as an audio renderer, as well as being a DLNA/OpenHome server and client. Think "BubbleUpnp" without video. My primary development platform is Win8.1. The program started as an ActiveState "pure-perl" DLNA MediaServer on Windows, which I then ported to Ubuntu, and which I got working under Android a few years ago. It was pretty funky ... all UI being presented through an HTTP server/jquery/jquery-ui, served from an Ubuntu shell running under Android (a trick in itself), serving up HTML pages to Chrome running on the same (Android) device. Besides being "funky", it had a major drawback: it required a valid IP address to work, as I could not figure out how to get Ubuntu to have a local loopback device for a 127.0.0.1 localhost. I use the app as a "car stereo" on my boat (which is my home), which is often not hooked up to the internet.

    I had a hard time getting started in Android app development because the speed of the Android emulators in Eclipse was horrid, and the ADB drivers did not work from Win8 for the longest time.

    Then one day, about a year ago, I ran into Genymotion (kudos to the authors), and all of a sudden I had a workable Android development environment. So I added a Java implementation of the DLNA server, which then grew into a renderer as well, using Android’s MediaPlayer class; I added the ability to act as a DLNA control point, and more recently also added OpenHome servers and renderers to it.

    In a separate effort, I created a build environment for a program called fpCalc, based on ffmpeg, on a variety of platforms, including Win, Linux, and Android x86, arm, and arm7 devices (bitbucket.org/phorton1/), and did an extensive series of tests to determine the validity and longevity of fpCalc fingerprints, discovering that the fpCalc fingerprint changed based on the version of ffmpeg it was built against (a separate topic, to be sure); but in the process I learned at least a bit about how to build ffmpeg, as well as Android shared libraries, JNI interfaces, etc.

    So now the Android-Java version of the program has advanced past the old perl version, and I am debating whether I want to continue to try to build the perl version (and/or add a wxPerl UI to it).

    One issue that has arisen for me is that the Genymotion emulator does not support WMA decoding, as Android dropped support for WMA due to licensing issues a ways back in time. Yet my music library has a significant number of tunes in WMA files, and I don’t want to "convert" them; my carefully thought-out philosophy is that my program does not modify the contents, or tags, or anything else in the original media files that I have accumulated, or will receive in the future, treating them instead as "artifacts" worth preserving "as is". No conversion is going to make a file "better" than it was, and I wish to preserve ALL of the original sources for ALL of my music going forward.

    So, I’m thinking, gee, I can build FFMPEG on 7 different platforms, and I see all these references to "OMX FFMPEG Codec Support for Android" on the net, so I’m thinking, "All I need to do is create the OMX Component and somehow get it into Genymotion".

    I have studied up on OMX and OpenMAX IL, seen Michael Chen’s posts, and seen the Stack Overflow questions

    How to make ffmpeg codec componet as OMX component

    and

    Android: How to integrate a decoder to multimedia framework

    and Cedric Fung’s page https://vec.io/posts/use-android-hardware-decoder-with-omxcodec-in-ndk, and Michael Chen’s repository at https://github.com/omxcodec, as well as virtually every other page on the net that mentions any combination of libstagefright, OMX, Genymotion, and FFMPEG.

    (This page would not let me put more than 2 links, as I don’t have a "10" reputation, or I would have listed some of the sources I have seen.)

    My Linux development environment is an Ubuntu 12.04 vbox running on my Win machine. I have downloaded and run the Android-x86 ISO as a vbox, and IT contains the ffmpeg codecs, but unfortunately it supports neither a wifi interface nor the vbox "guest additions", so it has a really funky mouse. I tried for about 3 days to address those two issues, but in the end I do not feel it is usable for my purposes, and I really like the way Genymotion "feels", particularly the mouse support. So I’d like to keep Genymotion as my "Windows Android" virtual device under which I may run my program, and deprecate and stop using my old perl source,

    except genymotion does not support WMA files ...


    Several side notes:

    (a) There is no good way to write a single sourced application in Java that runs natively in Windows, AND as an Android app.

    (b) I don’t want to reboot my Windows machine to a "real" Android device just to play my music files. The machine has to stay in Windows as I use it for other things as well.

    (c) I am writing this as my machine is in the 36th hour of downloading the entire AOSP source code base to a partition in my Ubuntu vbox, while I am sitting in a hotel room on a not-so-good internet connection in Panama City, Panama, before I return to my boat in remote Bocas del Toro, Panama, where the internet connection is even worse.

    (d) I did get WMA decoding to work in my app by calling my FFMPEG executable from Java (converting it to either WAV/PCM or AAC), but, because of limitations in Android’s MediaPlayer, it does not work well, particularly for remotely hosted WMA files ... MediaPlayer insists on having the whole file present before it starts to play, which can take several seconds or longer, and I am hoping that by getting a ’real’ WMA codec underneath MediaPlayer, that problem will just disappear ....
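    The workaround in (d) – shelling out to an ffmpeg binary to transcode WMA into a format MediaPlayer accepts – can be sketched as follows. This is an illustrative Python version rather than the asker's Java code, and it assumes an ffmpeg executable on the PATH; the file names are hypothetical:

```python
import subprocess

def ffmpeg_command(src: str, dst: str) -> list:
    """Build an ffmpeg call; ffmpeg picks the output codec from dst's extension."""
    return ["ffmpeg", "-y", "-i", src, dst]

def transcode(src: str, dst: str) -> None:
    """Run the conversion, raising if ffmpeg exits with an error."""
    subprocess.run(ffmpeg_command(src, dst), check=True)

# transcode("song.wma", "song.aac")   # or "song.wav" for WAV/PCM
```

    As the note above says, this still forces the whole converted file to exist before MediaPlayer will play it, which is exactly why a real codec underneath MediaPlayer would be preferable.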


    So, I’m trying to figure this whole mess out. There are a lot of tantalizing clues and suggestions, but what I have found, or at least what I am starting to believe, is that if I want to add a simple WMA audio decoding codec to Android (Genymotion), not only do I have to download, basically, the ENTIRE AOSP Android source tree and learn a new set of tools (repo, etc.), but I have to (be able to) rebuild, from scratch, the entire Android system, esp. libstagefright.so, in such a way as to be COMPLETELY compatible with the existing one in Genymotion, while at the same time adding ffmpeg codecs à la Michael Chen’s page.

    And I’m just asking, is it, could it really be that difficult?


    Anyways, this makes me crazy. Is there no way to just build a new component, or at worst a new OMX core, and add it to Genymotion, WITHOUT building all of Android, and preferably based only on the OMX .h files? Or do I REALLY have to replace the existing libstagefright.so, which means, basically, rebuilding all of Android ...

    p.s. I thought it would be nice to get this figured out, build it, and then post the installable new FFMPEG codecs someplace for other people to use, so that they don’t also grow warts on their ears and have steam shooting out of their eyeballs, while they get old trying to figure it out ....

  • Vulkan image data to AVFrames and to video

    12 April 2024, by W4zab1

    I am trying to encode Vulkan image data into video in MPEG-4 format. For some reason the output video file is corrupted: ffprobe shows discontinuities in the timestamps, and the frames are corrupted.

    First I prepare my video encoder. Then I get FrameEnded events from my engine, where I can get the image data from the Vulkan swapchain. I then convert the image data from Vulkan to AVFrames (RGBA to YUV420P), then pass the frames into a queue. This queue is then handled in another thread, where the frames are processed and written into the video. I am a bit of a noob with ffmpeg, so there may be some code that does not make sense.

    This seems like straightforward logic, but there are probably some problems with the codec params, the way I am converting the image data to an AVFrame, or something of that sort. The video file still gets created and has some data in it (it is > 0 bytes, and the longer the recording, the bigger the file size). There are no errors from ffmpeg with log_level set to DEBUG.
    struct FrameData {
        AVFrame* frame;
        int frame_index;
    };

    class EventListenerVideoCapture : public VEEventListenerGLFW {
    private:
        AVFormatContext* format_ctx = nullptr;
        AVCodec* video_codec = nullptr;
        AVCodecContext* codec_context = nullptr;
        AVStream* video_stream = nullptr;
        AVDictionary* muxer_opts = nullptr;
        int frame_index = 0;

        std::queue<FrameData*> frame_queue;
        std::mutex queue_mtx;
        std::condition_variable queue_cv;
        std::atomic<bool> stop_processing{ false };
        std::thread video_processing_thread;

        int prepare_video_encoder()
        {
            av_log_set_level(AV_LOG_DEBUG);
            // Add video stream to format context
            avformat_alloc_output_context2(&format_ctx, nullptr, nullptr, "video.mpg");
            video_stream = avformat_new_stream(format_ctx, NULL);
            video_codec = (AVCodec*)avcodec_find_encoder(AV_CODEC_ID_MPEG4);
            codec_context = avcodec_alloc_context3(video_codec);
            if (!format_ctx) { std::cerr << "Error: Failed to allocate format context" << std::endl; system("pause"); }
            if (!video_stream) { std::cerr << "Error: Failed to create new stream" << std::endl; system("pause"); }
            if (!video_codec) { std::cerr << "Error: Failed to find video codec" << std::endl; system("pause"); }
            if (!codec_context) { std::cerr << "Error: Failed to allocate codec context" << std::endl; system("pause"); }

            if (avio_open(&format_ctx->pb, "video.mpg", AVIO_FLAG_WRITE) < 0) { std::cerr << "Error: Failed to open file for writing!" << std::endl; return -1; }

            av_opt_set(codec_context->priv_data, "preset", "fast", 0);

            codec_context->codec_id = AV_CODEC_ID_MPEG4;
            codec_context->codec_type = AVMEDIA_TYPE_VIDEO;
            codec_context->pix_fmt = AV_PIX_FMT_YUV420P;
            codec_context->width = getWindowPointer()->getExtent().width;
            codec_context->height = getWindowPointer()->getExtent().height;
            codec_context->bit_rate = 1000 * 1000; // Bitrate
            codec_context->time_base = { 1, 30 }; // 30 FPS
            codec_context->gop_size = 10;

            av_dict_set(&muxer_opts, "movflags", "faststart", 0);

            //Unecessary? Since the params are copied anyways
            video_stream->time_base = codec_context->time_base;

            //Try to open codec after changes
            //copy codec_context params to videostream
            //and write headers to format_context
            if (avcodec_open2(codec_context, video_codec, NULL) < 0) { std::cerr << "Error: Could not open codec!" << std::endl; return -1; }
            if (avcodec_parameters_from_context(video_stream->codecpar, codec_context) < 0) { std::cerr << "Error: Could not copy params from context to stream!" << std::endl; return -1; };
            if (avformat_write_header(format_ctx, &muxer_opts) < 0) { std::cerr << "Error: Failed to write output file headers!" << std::endl; return -1; }
            return 0;
        }

        void processFrames() {
            while (!stop_processing) {
                FrameData* frameData = nullptr;
                {
                    std::unique_lock lock(queue_mtx);
                    queue_cv.wait(lock, [&]() { return !frame_queue.empty() || stop_processing; });

                    if (stop_processing && frame_queue.empty())
                        break;

                    frameData = frame_queue.front();
                    frame_queue.pop();
                }

                if (frameData) {
                    encodeAndWriteFrame(frameData);
                    AVFrame* frame = frameData->frame;
                    av_frame_free(&frame); // Free the processed frame
                    delete frameData;
                }
            }
        }

        void encodeAndWriteFrame(FrameData* frameData) {

            // Validation
            if (!frameData->frame) { std::cerr << "Error: Frame was null! " << std::endl; return; }
            if (frameData->frame->format != codec_context->pix_fmt) { std::cerr << "Error: Frame format mismatch!" << std::endl; return; }
            if ( av_frame_get_buffer(frameData->frame, 0) < 0) { std::cerr << "Error allocating frame buffer: " << std::endl; return; }
            if (!codec_context) return;

            AVPacket* pkt = av_packet_alloc();
            if (!pkt) { std::cerr << "Error: Failed to allocate AVPacket" << std::endl; system("pause"); }

            int ret = avcodec_send_frame(codec_context, frameData->frame);
            if (ret < 0) {
                std::cerr << "Error receiving packet from codec: " << ret << std::endl;
                delete frameData;
                av_packet_free(&pkt); return;
            }

            while (ret >= 0) {
                ret = avcodec_receive_packet(codec_context, pkt);

                //Error checks
                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) { break; }
                else if (ret < 0) { std::cerr << "Error receiving packet from codec: " << ret << std::endl; av_packet_free(&pkt); return; }
                if (!video_stream) { std::cerr << "Error: video stream is null!" << std::endl; av_packet_free(&pkt); return; }

                int64_t frame_duration = codec_context->time_base.den / codec_context->time_base.num;
                pkt->stream_index = video_stream->index;
                pkt->duration = frame_duration;
                pkt->pts = frameData->frame_index * frame_duration;

                int write_ret = av_interleaved_write_frame(format_ctx, pkt);
                if (write_ret < 0) { std::cerr << "Error: failed to write a frame! " << write_ret << std::endl;}

                av_packet_unref(pkt);
            }

            av_packet_free(&pkt);

        }

    protected:
        virtual void onFrameEnded(veEvent event) override {
            // Get the image data from vulkan
            VkExtent2D extent = getWindowPointer()->getExtent();
            uint32_t imageSize = extent.width * extent.height * 4;
            VkImage image = getEnginePointer()->getRenderer()->getSwapChainImage();

            uint8_t *dataImage = new uint8_t[imageSize];

            vh::vhBufCopySwapChainImageToHost(getEnginePointer()->getRenderer()->getDevice(),
                getEnginePointer()->getRenderer()->getVmaAllocator(),
                getEnginePointer()->getRenderer()->getGraphicsQueue(),
                getEnginePointer()->getRenderer()->getCommandPool(),
                image, VK_FORMAT_R8G8B8A8_UNORM,
                VK_IMAGE_ASPECT_COLOR_BIT, VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,
                dataImage, extent.width, extent.height, imageSize);

            // Create AVFrame for the converted image data
            AVFrame* frame = av_frame_alloc();
            if (!frame) { std::cout << "Could not allocate memory for frame!" << std::endl; return; }

            frame->format = AV_PIX_FMT_YUV420P;
            frame->width = extent.width;
            frame->height = extent.height;
            if (av_frame_get_buffer(frame, 0) < 0) { std::cerr << "Failed to allocate frame buffer! " << std::endl; return;} ;

            // Prepare context for converting from RGBA to YUV420P
            SwsContext* sws_ctx = sws_getContext(
                extent.width, extent.height, AV_PIX_FMT_RGBA,
                extent.width, extent.height, AV_PIX_FMT_YUV420P,
                SWS_BILINEAR, nullptr, nullptr, nullptr);

            // Convert the vulkan image data to AVFrame
            uint8_t* src_data[1] = { dataImage };
            int src_linesize[1] = { extent.width * 4 };
            int scale_ret = sws_scale(sws_ctx, src_data, src_linesize, 0, extent.height,
                      frame->data, frame->linesize);

            if (scale_ret <= 0) { std::cerr << "Failed to scale the image to frame" << std::endl; return; }

            sws_freeContext(sws_ctx);
            delete[] dataImage;

            // Add frame to the queue
            {
                std::lock_guard lock(queue_mtx);

                FrameData* frameData = new FrameData;
                frameData->frame = frame;
                frameData->frame_index = frame_index;
                frame_queue.push(frameData);

                frame_index++;
            }

            // Notify processing thread
            queue_cv.notify_one();
        }

    public:
        EventListenerVideoCapture(std::string name) : VEEventListenerGLFW(name) {
            //Prepare the video encoder
            int ret = prepare_video_encoder();
            if (ret < 0)
            {
                std::cerr << "Failed to prepare video encoder! " << std::endl;
                exit(-1);
            }
            else
            {
                // Start video processing thread
                video_processing_thread = std::thread(&EventListenerVideoCapture::processFrames, this);
            }
        }

        ~EventListenerVideoCapture() {
            // Stop video processing thread
            stop_processing = true;
            queue_cv.notify_one(); // Notify processing thread to stop

            if (video_processing_thread.joinable()) {
                video_processing_thread.join();
            }

            // Flush codec and close output file
            avcodec_send_frame(codec_context, nullptr);
            av_write_trailer(format_ctx);

            av_dict_free(&muxer_opts);
            avio_closep(&format_ctx->pb);
            avcodec_free_context(&codec_context);
            avformat_free_context(format_ctx);
        }
    };


    I have tried changing the codec params, and debugging and printing the video frame data, with no success.
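    One detail worth double-checking in cases like this is the timestamp arithmetic: pkt->pts and pkt->duration must be expressed in the stream's time base, which the muxer may change when avformat_write_header is called, so values computed in the codec's time base generally need rescaling (in C this is what av_packet_rescale_ts / av_rescale_q do). The arithmetic, sketched in Python with illustrative time bases (1/30 for the codec, 1/90000 as a typical MPEG stream time base) and ignoring rounding-mode details:

```python
from fractions import Fraction

def rescale_ts(ts: int, tb_in: Fraction, tb_out: Fraction) -> int:
    """Re-express a timestamp given in tb_in units in tb_out units."""
    return int(ts * tb_in / tb_out)

codec_tb = Fraction(1, 30)      # one tick per frame at 30 fps
stream_tb = Fraction(1, 90000)  # illustrative muxer time base
# Frame 3 occurs at t = 3/30 s, i.e. tick 9000 in the stream time base.
```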
