Advanced search

Media (91)

Other articles (14)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First of all, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are carried out on top of the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

On other sites (2289)

  • bash - Parse folder name, generate gif from images inside using ffmpeg

    16 January 2023, by BestMordaEver

    I have 150 folders with some nasty folder structure. Inside are animation frames helpfully named 1.png, 2.png and so on. I need to generate gifs from those images and have them named properly. Here's how I'd handle this case by case:

    Folder ./dragon/raw/bodies/dragon/green/old/5/down/idle has 25 .png files. I run the following commands:

    # pngs to gif
    ffmpeg -framerate 12 -start_number 1 -i ./dragon/raw/bodies/dragon/green/old/5/down/idle/%d.png dragon-green-old-5-down-idle-intermediate.gif

    # gif to boomerang gif
    ffmpeg -i dragon-green-old-5-down-idle-intermediate.gif -filter_complex "[0]trim=start_frame=1:end_frame=29,setpts=PTS-STARTPTS,reverse[r];[0][r]concat=n=2:v=1:a=0,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" dragon-green-old-5-down-idle.gif

    # remove the intermediate gif
    rm dragon-green-old-5-down-idle-intermediate.gif

    Note: the relevant part of the name for the gif is whatever comes after /raw/bodies.

    Question: how do I put this in a loop that iterates over all the nested folders, parses their names, and feeds the result into the commands above?

    Bonus question: is there a way to merge the two commands and create the boomerang gif directly, removing the need for intermediate files? (See the sketch below.)

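    Not part of the original post: a minimal sketch of one way the loop could look, assuming every frame folder sits somewhere below ./dragon/raw/bodies and holds frames named 1.png, 2.png, and so on (the base path, the 12 fps framerate and the naming rule are taken from the example above; everything else is an assumption). It also builds the boomerang in a single ffmpeg pass, so no intermediate gif is written:

    #!/usr/bin/env bash
    # Sketch: for every folder that contains a 1.png, build <path-after-raw/bodies>.gif
    set -euo pipefail

    base=./dragon/raw/bodies

    find "$base" -type f -name '1.png' | while read -r first; do
        dir=$(dirname "$first")

        # gif name = path after raw/bodies with slashes turned into dashes
        rel=${dir#"$base"/}
        name=${rel//\//-}

        # how many frames this folder holds (frames are 1.png, 2.png, ...)
        frames=$(find "$dir" -maxdepth 1 -name '*.png' | wc -l)

        # forward pass + reversed inner frames, concatenated, with a generated palette;
        # -nostdin keeps ffmpeg from swallowing the file list that feeds the while loop
        ffmpeg -nostdin -y -framerate 12 -start_number 1 -i "$dir/%d.png" -filter_complex \
            "[0]split[f][r0];[r0]trim=start_frame=1:end_frame=$((frames - 1)),setpts=PTS-STARTPTS,reverse[r];[f][r]concat=n=2:v=1:a=0,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" \
            "${name}.gif"
    done
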
  • How do I add text for the first 30 seconds inside a filter_complex expression for each video part?

    30 December 2022, by PirateApp

    I am generating a video grid using the following filter_complex command

    ffmpeg
     -i v_nimble_guardian.mkv -i macko_nimble_guardian.mkv -i ghost_nimble_guardian_1.mp4 -i nano_nimble_guardian.mkv
     -filter_complex "
         nullsrc=size=3840x2160 [base];
         [0:v] trim=start=39.117000,setpts=PTS-STARTPTS, scale=1920x1080 [upperleft];
         [1:v] trim=start=40.483000,setpts=PTS-STARTPTS, scale=1920x1080 [upperright];
         [2:v] trim=start=32.416471,setpts=PTS-STARTPTS, scale=1920x1080 [lowerleft];
         [3:v] trim=start=28.100000,setpts=PTS-STARTPTS, scale=1920x1080 [lowerright];
         [3:a] atrim=start=28.100000,asetpts=PTS-STARTPTS[outa];
         [base][upperleft] overlay=shortest=1 [tmp1];
         [tmp1][upperright] overlay=shortest=1:x=1920 [tmp2];
         [tmp2][lowerleft] overlay=shortest=1:y=1080 [tmp3];
         [tmp3][lowerright] overlay=shortest=1:x=1920:y=1080[v]
     "
     -map "[v]" -map "[outa]" -c:v libx264 -crf 17 -shortest -t 880 output4k.mkv

    How do I add text to this video grid so that it fades in at 10 seconds, stays for 30 seconds and then fades out?

    What I tried:

    ffmpeg
     -i v.mkv -i macko_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4 -i nano_nimble_guardian.mkv
     -filter_complex "
         nullsrc=size=1920x1080 [base];
         drawtext=text='Summer Video':enable='between(t,10,30)'[fg];
         [0:v] trim=start=39.117000,setpts=PTS-STARTPTS, scale=960x540 [upperleft];
         [1:v] trim=start=40.483000,setpts=PTS-STARTPTS, scale=960x540 [upperright];
         [2:v] trim=start=32.416471,setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
         [3:v] trim=start=28.100000,setpts=PTS-STARTPTS, scale=960x540 [lowerright];
         [3:a] atrim=start=28.100000,asetpts=PTS-STARTPTS[outa];
         [base][upperleft] overlay=shortest=1 [tmp1];
         [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
         [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
         [tmp3][lowerright] overlay=shortest=1:x=960:y=540[v]
     "
     -map "[v]" -map "[outa]" -c:v libx264 -shortest -t '30' output2.mkv

    It gives me an error (one way to rewire the drawtext filter is sketched after the log below):

    [Parsed_drawtext_1 @ 0x600002bdc420] Using "/System/Library/Fonts/Supplemental/Verdana.ttf"
    Filter drawtext:default has an unconnected output

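    Not from the original question: the message means the drawtext branch produces an output pad ([fg]) that nothing ever reads, because drawtext was written as its own chain instead of being applied to a video stream. A hedged sketch of one way to wire it in, assuming a one-second fade in at 10 s, full visibility until 40 s and a one-second fade out (the font size, colour and position are made up), is to label the last overlay output and chain drawtext onto it:

    # sketch only: same grid as above, with drawtext chained after the final overlay
    ffmpeg \
     -i v.mkv -i macko_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4 -i nano_nimble_guardian.mkv \
     -filter_complex "
         nullsrc=size=1920x1080 [base];
         [0:v] trim=start=39.117000,setpts=PTS-STARTPTS, scale=960x540 [upperleft];
         [1:v] trim=start=40.483000,setpts=PTS-STARTPTS, scale=960x540 [upperright];
         [2:v] trim=start=32.416471,setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
         [3:v] trim=start=28.100000,setpts=PTS-STARTPTS, scale=960x540 [lowerright];
         [3:a] atrim=start=28.100000,asetpts=PTS-STARTPTS[outa];
         [base][upperleft] overlay=shortest=1 [tmp1];
         [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
         [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
         [tmp3][lowerright] overlay=shortest=1:x=960:y=540 [grid];
         [grid] drawtext=text='Summer Video':fontcolor=white:fontsize=72:x=(w-text_w)/2:y=80:alpha='if(lt(t,10),0,if(lt(t,11),t-10,if(lt(t,40),1,if(lt(t,41),41-t,0))))' [v]
     " \
     -map "[v]" -map "[outa]" -c:v libx264 -shortest -t 60 output2.mkv

    The -t value has to reach past 41 s or the fade out gets cut off. The enable='between(t,10,30)' approach from the attempt above also works, but it switches the text on and off instantly instead of fading it.
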
  • How to convert an rtsp ffmpeg mp4 buffer into ImageData inside nodejs (nestjs)

    23 December 2022, by distante

    I am reading an RTSP stream using ffmpeg in this way:

    const ffmpegArgs = [
      '-rtsp_transport',
      'tcp',
      '-i',
      this.rtspStreamPath,
      '-vcodec',
      'copy',
      '-f',
      'mp4',
      '-movflags',
      'frag_keyframe+empty_moov',
      '-reset_timestamps',
      '1',
      '-vsync',
      '1',
      '-flags',
      'global_header',
      '-bsf:v',
      'dump_extra',
      '-y',
      '-', // output to stdout
    ];

    const liveffmpeg = spawn(ffmpegPath, ffmpegArgs, {
      detached: false,
    });

    liveffmpeg.stdout.on('data', (data) => {
      console.log('data here!', data.toString());
    });

    Later I pipe liveffmpeg.stdout to the response object of my NestJS controller, so it can be consumed by the frontend in an HTML video element (roughly as sketched below).

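    For context (not in the original post), the piping step described above amounts to something like the following NestJS controller; every name here is illustrative, and it assumes the default Express platform:

    import { Controller, Get, Res } from '@nestjs/common';
    import type { Response } from 'express';
    import { spawn, ChildProcessWithoutNullStreams } from 'child_process';

    @Controller('stream')
    export class StreamController {
      // stands in for the liveffmpeg process created in the snippet above
      private readonly liveffmpeg: ChildProcessWithoutNullStreams = spawn('ffmpeg', [
        '-rtsp_transport', 'tcp', '-i', 'rtsp://example/stream',
        '-vcodec', 'copy', '-f', 'mp4', '-movflags', 'frag_keyframe+empty_moov', '-',
      ]);

      @Get()
      live(@Res() res: Response): void {
        res.setHeader('Content-Type', 'video/mp4');
        this.liveffmpeg.stdout.pipe(res); // the browser's <video> element reads from this endpoint
      }
    }
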
    In the frontend I calculate this by taking a snapshot of the video element into a canvas, getting the ImageData from the canvas context, and doing my calculations once I have two or more updates (as in the sketch below).

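    For reference (also not in the original post), the frontend step just described boils down to roughly this browser-side snippet, with the element lookup being a placeholder:

    // snapshot the current video frame into a canvas and read the pixels back
    const video = document.querySelector('video');            // the element fed by the NestJS stream
    const canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;

    const ctx = canvas.getContext('2d');
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);  // copy the frame
    const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
    // imageData.data is a Uint8ClampedArray of RGBA bytes, ready for the averaging code
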
    Now I need to do the exact same thing on the backend, but I am not able to convert the data value from liveffmpeg.stdout.on('data', ...) into an ImageData so that I can use the same averaging function I use in the frontend.

    I already installed node-canvas to try to reproduce the frontend steps, but to draw something into the canvas context I need an Image, and no Video object can be constructed in Node.js.

    So, is there a way to convert the stdout data of the FFmpeg process into ImageData objects? Or a way to extract the image from each data update so I can manually draw it on the node-canvas element and get the ImageData? (One possible approach is sketched at the end of this post.)

    Thanks!

    P.S.: I think converting them into a Uint8ClampedArray could also work, since node-canvas has a function that takes one: createImageData(data: Uint8ClampedArray, width: number, height?: number) => ImageData
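
    Not part of the original post: one hedged way to get there. The fragmented MP4 that the command above writes to stdout is a muxed container, not raw pixel data, so it cannot be wrapped in an ImageData directly. A sketch of an alternative, assuming ffmpeg is on the PATH and the stream is scaled to a known size (the URL, width and height are placeholders for the real values), is to have a second ffmpeg process emit raw RGBA frames and slice its stdout into frame-sized chunks:

    import { spawn } from 'child_process';
    import { createImageData } from 'canvas'; // node-canvas

    const WIDTH = 1280;                     // assumed frame size
    const HEIGHT = 720;
    const FRAME_SIZE = WIDTH * HEIGHT * 4;  // bytes per RGBA frame

    const decoder = spawn('ffmpeg', [
      '-rtsp_transport', 'tcp',
      '-i', 'rtsp://example/stream',        // stands in for this.rtspStreamPath
      '-f', 'rawvideo',                     // no container, just pixels
      '-pix_fmt', 'rgba',                   // same byte layout as ImageData
      '-vf', `scale=${WIDTH}:${HEIGHT}`,    // force the size the slicing assumes
      '-',                                  // write to stdout
    ]);

    let pending = Buffer.alloc(0);
    decoder.stdout.on('data', (chunk: Buffer) => {
      pending = Buffer.concat([pending, chunk]);
      // stdout chunks do not line up with frame boundaries, so only peel off whole frames
      while (pending.length >= FRAME_SIZE) {
        const frame = pending.subarray(0, FRAME_SIZE);
        pending = pending.subarray(FRAME_SIZE);
        const imageData = createImageData(new Uint8ClampedArray(frame), WIDTH, HEIGHT);
        // imageData can now go through the same averaging code used in the frontend
      }
    });

    The existing -vcodec copy process can keep feeding the frontend untouched; this second process would exist only for the backend calculations.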