Other articles (60)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)
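
    The gist of that fallback decision can be sketched in a few lines of JavaScript; this is a hypothetical illustration of the general technique, not MediaSPIP's actual player code:

    // Probe whether the browser can play H.264/AAC in an HTML5 <video> tag;
    // canPlayType returns "", "maybe" or "probably".
    var probe = document.createElement('video');
    var canPlayH264 = !!(probe.canPlayType &&
        probe.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"').replace(/no/, ''));
    if (!canPlayH264) {
        // Load the Flash fallback (e.g. Flowplayer) instead of the native player.
    }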

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions are performed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
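
    SPIPMotion's own implementation is not shown in this excerpt, but the two extra actions map naturally onto one ffprobe call and one ffmpeg call; here is a minimal Node.js sketch, with source.mp4 and the output names as placeholders:

    const { execFile } = require('child_process');

    // 1. Retrieve the technical information about the audio and video streams.
    execFile('ffprobe',
        ['-v', 'quiet', '-print_format', 'json', '-show_streams', 'source.mp4'],
        (err, stdout) => {
            if (!err) console.log(JSON.parse(stdout).streams); // codecs, resolution, duration...
        });

    // 2. Generate a thumbnail by extracting a single frame.
    execFile('ffmpeg',
        ['-i', 'source.mp4', '-ss', '1', '-frames:v', '1', 'thumb.jpg'],
        (err) => {
            if (!err) console.log('thumbnail written');
        });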

On other sites (6806)

  • Stream real-time video flux in HTML video tag

    26 September 2018, by c.censier

    I want to stream a real-time video feed coming from UDP into an HTML video tag.
    I did some research, but I found a lot of information and I'm struggling to get a clear overview of what I can and can't do.

    The video stream uses the H.264 and AAC codecs in an MP4 container and has a 3840x2160 (4K) resolution. I'd like to play it in Chrome (latest version).

    As I understand it so far, the HTML video tag can natively play H.264/AAC videos. I made it work with the video file served directly from my server (I'm using Meteor JS + React).

    I learnt to use FFmpeg to stream a UDP feed that VLC player can read, and then used FFserver (I know it's deprecated) to create an HTTP stream that VLC can also read, but the HTML video tag cannot.

    So... my question is: can the HTML video tag natively play a video stream coming over HTTP?

    I've seen a lot of discussion about HLS and DASH, but I didn't understand whether (and why) they're mandatory.

    I read a post about someone creating an HLS m3u8 playlist using only FFmpeg; is that a viable solution?
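
    The FFmpeg-only HLS approach from that post would look roughly like this; a minimal sketch, with input.mp4 and stream.m3u8 as placeholder names (the playlist and its .ts segments can then be served by any HTTP server; note that Safari plays HLS natively, while Chrome needs a Media Source Extensions player such as hls.js):

    ffmpeg -re -i input.mp4 -c:v libx264 -c:a aac -f hls -hls_time 4 -hls_list_size 0 stream.m3u8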

    FFserver configuration

    HTTPPort                        8090
    HTTPBindAddress                 0.0.0.0
    MaxHTTPConnections              20
    MaxClients                      10
    MaxBandwidth                    100000

    <Feed feed.ffm>
     File                          /tmp/feed.ffm
     FileMaxSize                   1g
     ACL allow                     127.0.0.1
    </Feed>

    <Stream>
     Feed                          feed.ffm
     Format                        mpeg
     AudioCodec                    aac
     AudioBitRate                  256
     AudioChannels                 1
     VideoCodec                    libx264
     VideoBitRate                  10000      # Total random here
     VideoBitRateRange             5000-15000 # And here...
     VideoFrameRate                30
     VideoQMin                     1
     VideoQMax                     50
     VideoSize                     3840x2160
     VideoBufferSize               20000      # Not sure either
     AVOptionVideo                 flags +global_header
    </Stream>

    I had to specify QMin and QMax to avoid an error message, but I don't really understand what they are.

    FFmpeg command line

    ffmpeg -re -i bbb_sunflower_2160p_30fps_normal.mp4 -strict -2 -r 30 -vcodec libx264 http://localhost:8090/feed.ffm

    This works with VLC. I'm working with a file on my computer before moving to a UDP stream.

  • Pipe video frames from ffmpeg to canvas without loading the entire video into memory

    1 January 2024, by Aviato

    I am working on a project that involves frame manipulation, and I decided to use the node-canvas API for that. I used to work with OpenCV in Python, where the cv2.VideoCapture class takes a video as input and lets you loop through its frames one at a time without having to load them all into memory at once.
    Now I have tried a lot of ways to replicate the same thing using ffmpeg, i.e. loading frames from a video in an ordered, but "on-demand", fashion.

    I tried running ffmpeg as a child process and reading the frames from its standard output.

    const spawnProcess = require('child_process').spawn,
        ffmpeg = spawnProcess('ffmpeg', [
            '-i', 'test.mp4',
            '-vcodec', 'png',
            '-f', 'rawvideo',
            '-s', '1920*1080', // size of one frame
            'pipe:1'
        ]);
    ffmpeg.stdout.on('data', (data) => {
        try {
            // console.log(tf.node.decodeImage(data).shape)
            console.log(`${++i} frames read`)
            //context.drawImage(data, 0, 0, width, height)
        } catch(e) {
            console.log(e)
        }
    })

    The console prints something like 4000+ logs, but the video only had 150 frames. After much investigating and console-logging the data, I found that it was raw buffer data: the 'data' events don't correspond to one frame each, they just deliver chunks of the byte stream in an unstructured way.
    I want to read the frames of a video and process them one at a time in memory; I don't want to hold all the frames at once, either in memory or on the filesystem.


    I also want to pipe the frames in a format that can be drawn onto a canvas using drawImage.
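
    One way to get whole frames out of that pipe, sketched below under stated assumptions (1920x1080 frames; the variable names are mine): request raw RGBA pixels instead of PNG, since -vcodec png produces compressed, variable-sized images that cannot be sliced at fixed offsets, whereas raw video makes every frame exactly width * height * 4 bytes, so the unstructured stdout chunks can be reassembled frame by frame.

    const { spawn } = require('child_process');

    const width = 1920, height = 1080;
    const frameSize = width * height * 4; // bytes per raw RGBA frame

    const ffmpeg = spawn('ffmpeg', [
        '-i', 'test.mp4',
        '-f', 'rawvideo',   // no container, just concatenated frames
        '-pix_fmt', 'rgba', // 4 bytes per pixel, matches canvas ImageData
        'pipe:1'
    ]);

    let pending = Buffer.alloc(0);
    let frameCount = 0;

    ffmpeg.stdout.on('data', (chunk) => {
        // 'data' events carry arbitrary chunk sizes; buffer until a full frame is in.
        pending = Buffer.concat([pending, chunk]);
        while (pending.length >= frameSize) {
            const frame = pending.subarray(0, frameSize);
            pending = pending.subarray(frameSize);
            frameCount++;
            // frame is one raw RGBA image, usable with node-canvas via
            // ctx.putImageData(new ImageData(new Uint8ClampedArray(frame), width, height), 0, 0)
        }
    });

    ffmpeg.on('close', () => console.log(`${frameCount} frames read`));

    To keep memory bounded on long videos, the stream can be paused with ffmpeg.stdout.pause() while a frame is being processed and resumed afterwards.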

  • ffmpeg black screen issue when generating videos from a list of frames

    11 May 2023, by arlaine

    I used a video to generate a list of frames from it, then I wanted to create multiple videos from this list of frames. I set starting and ending frame indexes for each "sub video", so for example indexes = [[0, 64], [64, 110], [110, 234], [234, 449]], and those indexes let my code generate 4 videos of various durations. The idea is to decompose the original video into multiple sub-videos. My code works just fine; the videos are generated.

    But every sub-video starts with multiple seconds of black screen; only the first generated video (the one using indexes[0] for its starting and ending frames) comes out without this black-screen part. I've tried changing the frame rate for each sub-video according to its number of frames, and things like that, but it didn't work. You can find my code below.

    for i, (start_idx, end_idx) in enumerate(self.video_frames_indexes):
        if end_idx - start_idx > 10:
            shape = cv2.imread(f'output/video_reconstitution/{video_name}/final/frame_{start_idx}.jpg').shape
            os.system(f'ffmpeg -r 30 -s {shape[0]}x{shape[1]} -i output/video_reconstitution/{video_name}/final/frame_%d.JPG'
                      f' -vf "select=between(n\,{start_idx}\,{end_idx})" -vcodec libx264 -crf 25'
                      f' output/video_reconstitution/IMG_7303/sub_videos/serrage_{i}.mp4')

    Just the ffmpeg command

    &#xA;

    ffmpeg -r 30 -s {shape[0]}x{shape[1]} -i output/video_reconstitution/{video_name}/final/frame_%d.JPG -vf "select=between(n\,{start_idx}\,{end_idx})" -vcodec libx264 -crf 25 output/video_reconstitution/IMG_7303/sub_videos/serrage_{i}.mp4
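
    A likely cause, offered as a hedged guess: the select filter drops the frames before start_idx but keeps the surviving frames' original timestamps, so each sub-video begins with a gap that players render as black until start_idx/30 seconds have passed; the first range starts at 0, which is why only it looks right. One way around this is to skip the filter entirely and let the image2 demuxer pick the range, using -start_number and -frames:v (placeholders as in the original command):

    ffmpeg -framerate 30 -start_number {start_idx} -i output/video_reconstitution/{video_name}/final/frame_%d.JPG -frames:v {end_idx - start_idx} -vcodec libx264 -crf 25 output/video_reconstitution/IMG_7303/sub_videos/serrage_{i}.mp4

    Alternatively, appending setpts=PTS-STARTPTS after the select filter in the original command should reset the timestamps to start at zero. Note also that cv2.imread(...).shape is (height, width, channels), so -s {shape[0]}x{shape[1]} puts height before width.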
