
Other articles (25)

  • Installation in farm mode

    4 February 2011

    Farm mode makes it possible to host several MediaSPIP sites while installing their functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge, since the usual SPIP private area is no longer used.
    To begin with, you must have installed the same files as the installation (...)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.

On other sites (6778)

  • Raw extraction of frames from a movie

    29 September 2016, by vkubicki

    I would like to extract images from a grayscale mj2 movie. Each pixel is encoded using 16 bits. Since this is a technical movie, I need to extract the value at each pixel without processing, as those values map linearly to a physical quantity (a heatmap from an infrared camera). I am using Scala, and I have not found a suitable solution for direct extraction (either in Scala or in Java, but I am a beginner). Therefore I intend to use ffmpeg to extract individual frames to disk, then load them as BufferedImage in Scala and process them.

    Is this a good approach? Which format should I use to avoid any transformation of the data? I want each extracted frame to be as "raw" as possible. Is it possible to directly output a CSV containing the aforementioned values?
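    A minimal sketch of the ffmpeg side, assuming the file is named input.mj2, the frame dimensions are known in advance, and ffmpeg was built with a JPEG 2000 decoder (all assumptions): decoding straight to gray16le rawvideo skips every intermediate image format, so each pixel arrives as its untouched 16-bit value. Python is used here for brevity; the same byte stream could be consumed from Scala.

        import subprocess
        import numpy as np

        WIDTH, HEIGHT = 640, 480  # placeholders: use the movie's real dimensions

        # Decode the MJ2 movie to raw little-endian 16-bit grayscale on stdout;
        # rawvideo output means no container and no re-encoding of pixel values.
        proc = subprocess.Popen(
            ['ffmpeg', '-i', 'input.mj2',
             '-f', 'rawvideo', '-pix_fmt', 'gray16le',
             '-loglevel', 'error', 'pipe:1'],
            stdout=subprocess.PIPE,
        )

        frame_size = WIDTH * HEIGHT * 2  # 2 bytes per pixel
        frame_index = 0
        while True:
            buf = proc.stdout.read(frame_size)
            if len(buf) < frame_size:
                break
            frame = np.frombuffer(buf, dtype=np.uint16).reshape(HEIGHT, WIDTH)
            # 'frame' now holds the raw 16-bit values; a CSV per frame is then:
            np.savetxt(f'frame_{frame_index:04d}.csv', frame, fmt='%d', delimiter=',')
            frame_index += 1

    If individual files on disk are preferred instead, 16-bit PNG (-pix_fmt gray16be) or PGM should also preserve the values losslessly.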

  • For modifying MPEG-4 Part 2 video, which is the easiest library/approach I can use?

    17 December 2015, by liamzebedee

    I’m trying to implement a video watermarking system which modifies a subset of individual pixels (i.e. the RGB values at given x,y coordinates). The base use case would be modifying an MP4, which means modifying the contained MPEG-4 Part 2 video stream.

    I’ve done some research, and have found that it isn’t as simple as just modifying the raw frames, as the ubiquitous P-frames and B-frames rely on compressing the output by only storing the differences between frames.

    I’m relatively technology-agnostic; I just want to find a solution. Which library/framework should I use (ffmpeg seems likely for now), and which approach should I take?
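    For what it's worth, the common ffmpeg-based approach follows directly from the observation above: since pixels cannot be poked inside the compressed stream, you decode to raw frames, edit the pixels, and re-encode. A rough Python sketch, assuming a 1280x720, 25 fps input named input.mp4 (all placeholders), with the caveat that re-encoding is lossy unless a lossless codec is chosen:

        import subprocess
        import numpy as np

        W, H = 1280, 720  # placeholder frame size of the input

        # Decoder: every frame of the MP4 as raw rgb24 bytes on stdout.
        dec = subprocess.Popen(
            ['ffmpeg', '-i', 'input.mp4',
             '-f', 'rawvideo', '-pix_fmt', 'rgb24',
             '-loglevel', 'error', 'pipe:1'],
            stdout=subprocess.PIPE,
        )

        # Encoder: raw rgb24 frames on stdin, re-encoded into a new MP4.
        enc = subprocess.Popen(
            ['ffmpeg', '-f', 'rawvideo', '-pix_fmt', 'rgb24',
             '-s', f'{W}x{H}', '-r', '25', '-i', 'pipe:0',
             '-c:v', 'libx264', '-loglevel', 'error', 'watermarked.mp4'],
            stdin=subprocess.PIPE,
        )

        frame_size = W * H * 3
        while True:
            buf = dec.stdout.read(frame_size)
            if len(buf) < frame_size:
                break
            frame = np.frombuffer(buf, dtype=np.uint8).reshape(H, W, 3).copy()
            frame[10, 20] = (255, 0, 0)  # example watermark: pixel at y=10, x=20
            enc.stdin.write(frame.tobytes())

        enc.stdin.close()
        enc.wait()

    Audio is not carried through this sketch; in practice the original audio track would be mapped back in at a final mux step.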

  • Decode h264 video bytes into JPEG frames in memory with ffmpeg

    5 February 2024, by John Karkas

    I'm using python and ffmpeg (4.4.2) to generate an h264 video stream from images produced continuously by a process. I am aiming to send this stream over a websocket connection, decode it into individual image frames at the receiving end, and emulate a stream by continuously pushing frames to an <img> tag in my HTML.


    However, I cannot read images at the receiving end, after trying combinations of rawvideo input format, image2pipe format, re-encoding the incoming stream with mjpeg and png, etc. So I would be happy to know what the standard way of doing something like this would be.


    At the source, I'm piping frames from a while loop into ffmpeg to assemble an h264-encoded video. My command is:


        command = [
            'ffmpeg',
            '-f', 'rawvideo',
            '-pix_fmt', 'rgb24',
            '-s', f'{shape[1]}x{shape[0]}',
            '-re',
            '-i', 'pipe:',
            '-vcodec', 'h264',
            '-f', 'rawvideo',
            # '-vsync', 'vfr',
            '-hide_banner',
            '-loglevel', 'error',
            'pipe:'
        ]


    At the receiving end of the websocket connection, I can save the images to storage by including:


        command = [
            'ffmpeg',
            '-i', '-',  # Read from stdin
            '-c:v', 'mjpeg',
            '-f', 'image2',
            '-hide_banner',
            '-loglevel', 'error',
            f'encoded/img_%d_encoded.jpg'
        ]


    in my ffmpeg command.


    But I want to instead extract each individual frame coming in the pipe and load it in my application, without saving it to storage. So basically, I want whatever happens at the 'encoded/img_%d_encoded.jpg' line in ffmpeg, but with access to each frame from the stdout pipe of an ffmpeg subprocess running in its own thread at the receiving end.


    • What would be the most appropriate ffmpeg command to fulfil a use case like the above? And how could it be tuned to be faster or to give higher quality?

    • Would I be able to read from the stdout buffer with process.stdout.read(2560x1440x3) for each frame?

    If you feel strongly about referring me to a more up-to-date version of ffmpeg, please do so.


    PS: It is understandable that this may not be the optimal way to create a stream. Nevertheless, I do not think there should be much complexity in this, and the latency should be low. I could instead send JPEG images over the websocket and view them in my <img> tag, but I want to save on bandwidth and shift some of the computational effort to the receiving end.

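    Not an authoritative answer, but one way to keep frames in memory rather than on disk is to have the receiving ffmpeg decode the stream back to rawvideo on stdout and read fixed-size chunks from the pipe. That also settles the stdout.read question: with rgb24 output every frame is exactly width x height x 3 bytes. A sketch, assuming the sender emits a bare Annex-B h264 stream and 2560x1440 frames (both assumptions):

        import subprocess
        import numpy as np

        WIDTH, HEIGHT = 2560, 1440  # assumed, matching the read() question above

        # Decode a bare h264 byte stream from stdin into raw rgb24 frames on stdout.
        proc = subprocess.Popen(
            ['ffmpeg',
             '-f', 'h264', '-i', 'pipe:0',
             '-f', 'rawvideo', '-pix_fmt', 'rgb24',
             '-loglevel', 'error', 'pipe:1'],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
        )

        frame_size = WIDTH * HEIGHT * 3  # rgb24: exactly 3 bytes per pixel

        def read_frame():
            # Blocks until one whole frame is available; None at end of stream.
            buf = proc.stdout.read(frame_size)
            if len(buf) < frame_size:
                return None
            return np.frombuffer(buf, dtype=np.uint8).reshape(HEIGHT, WIDTH, 3)

        # Bytes arriving on the websocket are written to proc.stdin, ideally from
        # a separate thread so stdin writes and stdout reads cannot deadlock.

    If JPEG bytes are wanted in memory instead of raw pixels, each decoded frame can be encoded with any image library; piping '-c:v mjpeg' out of ffmpeg also works, but JPEG sizes vary per frame, so fixed-size reads no longer apply.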