Advanced search

Media (0)

Keyword: - Tags -/serveur

No media matching your criteria is available on this site.

Other articles (102)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFMpeg: the main encoder; transcodes almost all types of video and audio files into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Complementary, optional binaries flvtool2: (...)

  • Automatic backup of SPIP channels

    1 April 2010, by

    When setting up an open platform, it is important for hosts to have fairly regular backups available in order to guard against any potential problem.
    This task relies on two SPIP plugins: Saveauto, which performs regular backups of the database as a mysql dump (usable in phpmyadmin), and mes_fichiers_2, which creates a zip archive of the site's important data (documents, elements (...)

On other sites (6836)

  • Insert still frames into H.264 video stream

    7 July 2021, by Bassinator

    I'm building an application that receives video packets which are encoded as H.264 from Microsoft Teams - I get one packet for each frame of video. Specifications of the packet contents are given here. For every packet I receive, I write the byte contents of the data[] buffer to a file. This resulting file is a playable H.264 encoded video.

    I'm trying to handle the scenario of syncing the audio and video streams from a Teams meeting, and inserting a still frame PNG as a "filler" when nobody has their camera on.

    I used the following FFMPEG command to generate n seconds of H.264 video from the filler frame:

    ffmpeg -loop 1 -i video_filler_frame.png -framerate 30 -c:v libx264 -t 2 -vf scale=1920:1080 C:\Code\temp\out.mp4

    This generates an MP4 file (H.264 encoded) - as a test in my code, I tried to read the contents of that generated file as a byte array and append them to the video file.

    However, this doesn't appear to work. I'm guessing this is because there is some kind of header or other metadata that prevents us from doing the simple solution of just appending the bytes of the next frame.

    My question is, how can I achieve what I am trying to do? I'd like to splice in n frames as I am writing the individual packet contents to the file. In other words, consider the following sequence:

    • Write packets of video to the file
    • My code determines that filler frames are needed at some point in this process
      • Insert the needed number of filler frames into the file
    • Continue writing packets of video as they come in
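    The byte-append failure is expected: MP4 is a container with its own header boxes, so two MP4 files cannot be joined by simply concatenating their bytes. One common workaround (a sketch, not from the original post) is to write each run of packets, and each generated filler clip, as a separate segment file, then let ffmpeg's concat demuxer rejoin them losslessly with `-c copy`. The helper below only builds the text of the concat list file; all file names are hypothetical.

    ```python
    def build_concat_list(segment_paths):
        """Build the text of an ffmpeg concat-demuxer list file.

        Each segment (recorded video or generated filler) is listed in
        playback order; ffmpeg then rejoins them without re-encoding:
            ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4
        This requires all segments to share codec parameters.
        """
        lines = []
        for path in segment_paths:
            # Single quotes in paths must be escaped for the concat demuxer.
            escaped = path.replace("'", r"'\''")
            lines.append(f"file '{escaped}'")
        return "\n".join(lines) + "\n"

    # Hypothetical order: camera segment, 2 s filler clip, next camera segment.
    text = build_concat_list(["seg_000.mp4", "filler_2s.mp4", "seg_001.mp4"])
    ```

    For stream copy to work, the filler clip should be encoded with the same resolution, frame rate, and H.264 profile as the Teams packets; otherwise a re-encode of the joined output is needed.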


  • FFmpeg create video from images, insert images as frames from timestamps?

    29 August 2018, by Hunter_AP

    So I’m trying to extract every frame of a video, then use ffprobe to see when each frame is played within a video, then be able to stitch that video back together using those extracted images and ffprobe output.

    Right now, I have this batch file:

    for %%a in (*.mp4) do (
       mkdir "%%~na_images" > NUL
       ffmpeg.exe -hide_banner -i "%%a"  -t 100 "%%~na_images\image-%%d.png"
       ffprobe.exe "%%a" -hide_banner -show_entries frame=coded_picture_number,best_effort_timestamp_time -of csv > "%%~na_frames.txt"
    )

    First, a directory is made for the images.
    Then ffmpeg extracts all the frames of the video to individual PNG files, which are numbered appropriately.
    Lastly, ffprobe records when each frame is first shown within that video (i.e. frame 1 is shown at 0 seconds, but at, say, 60fps, frame 2 is shown at 0.016667 seconds). The output looks like this:

    frame,0.000000,0
    frame,0.000000
    frame,0.017000,1
    frame,0.023220

    Here the first number is the timestamp (e.g. 0.017000 is the time the second frame appears) and the second number is the frame number.
    Now my problem is using ffmpeg to place each frame at the proper time within the video. I can do this using another language (probably Python): my best guess is to loop through the ffprobe output file, get each frame's time and image number, place that frame at the point where it appears, then move on to the next frame and time placement. Using the frame data above as an example, it'd be something like this:

    for line in lines:
        # strip() removes the trailing newline before splitting the CSV fields
        mySplit = line.strip().split(',')
        # e.g. mySplit == ['frame', '0.000000', '0'] for the first video frame
        # Get image number 0 and insert at time 0.000000
    This is the part that I’m not sure how to do in a coding sense. I can read in and parse the lines of the ffprobe output text file, but I have no idea how to insert the frames at certain points in a video using ffmpeg or similar solutions.
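    One way to finish this (a sketch, assuming the ffprobe lines look exactly like the sample above) is to derive each frame's display duration from the gap to the next timestamp, then emit an ffmpeg concat list with `duration` directives; ffmpeg can rebuild the video from the numbered PNGs with `ffmpeg -f concat -i list.txt out.mp4`. The `image-N.png` naming, the mapping of coded_picture_number to PNG index, and the 1/30 s fallback for the last frame are all assumptions here.

    ```python
    def frames_to_concat_list(ffprobe_lines, fallback=1 / 30):
        """Turn ffprobe frame lines into ffmpeg concat-list text.

        Only lines with three fields carry a frame number; each frame's
        duration is the gap to the next frame's timestamp (the fallback
        value is used for the final frame, which has no successor).
        """
        frames = []  # (timestamp, frame_number)
        for line in ffprobe_lines:
            parts = line.strip().split(',')
            if len(parts) == 3 and parts[0] == 'frame':
                frames.append((float(parts[1]), int(parts[2])))
        frames.sort()  # display order by timestamp
        out = []
        for i, (ts, num) in enumerate(frames):
            dur = frames[i + 1][0] - ts if i + 1 < len(frames) else fallback
            # Assumes ffmpeg's image-%d.png numbering started at 1,
            # while coded_picture_number starts at 0.
            out.append(f"file 'image-{num + 1}.png'")
            out.append(f"duration {dur:.6f}")
        return "\n".join(out) + "\n"
    ```

    Note that coded_picture_number reflects coding order, which can differ from display order when B-frames are present, so the index mapping may need adjustment for such streams.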

  • aarch64: vp9itxfm: Skip empty slices in the first pass of idct_idct 16x16 and 32x32

    9 January 2017, by Martin Storsjö
    aarch64: vp9itxfm: Skip empty slices in the first pass of idct_idct 16x16 and 32x32

    This work is sponsored by, and copyright, Google.

    Previously all subpartitions except the eob=1 (DC) case ran with
    the same runtime:

    vp9_inv_dct_dct_16x16_sub16_add_neon: 1373.2
    vp9_inv_dct_dct_32x32_sub32_add_neon: 8089.0

    By skipping individual 8x16 or 8x32 pixel slices in the first pass,
    we reduce the runtime of these functions like this :

    vp9_inv_dct_dct_16x16_sub1_add_neon: 235.3
    vp9_inv_dct_dct_16x16_sub2_add_neon: 1036.7
    vp9_inv_dct_dct_16x16_sub4_add_neon: 1036.7
    vp9_inv_dct_dct_16x16_sub8_add_neon: 1036.7
    vp9_inv_dct_dct_16x16_sub12_add_neon: 1372.1
    vp9_inv_dct_dct_16x16_sub16_add_neon: 1372.1
    vp9_inv_dct_dct_32x32_sub1_add_neon: 555.1
    vp9_inv_dct_dct_32x32_sub2_add_neon: 5190.2
    vp9_inv_dct_dct_32x32_sub4_add_neon: 5180.0
    vp9_inv_dct_dct_32x32_sub8_add_neon: 5183.1
    vp9_inv_dct_dct_32x32_sub12_add_neon: 6161.5
    vp9_inv_dct_dct_32x32_sub16_add_neon: 6155.5
    vp9_inv_dct_dct_32x32_sub20_add_neon: 7136.3
    vp9_inv_dct_dct_32x32_sub24_add_neon: 7128.4
    vp9_inv_dct_dct_32x32_sub28_add_neon: 8098.9
    vp9_inv_dct_dct_32x32_sub32_add_neon: 8098.8

    I.e. in general a very minor overhead for the full subpartition case due
    to the additional cmps, but a significant speedup for the cases when we
    only need to process a small part of the actual input data.

    This is cherrypicked from libav commits
    cad42fadcd2c2ae1b3676bb398844a1f521a2d7b and
    a0c443a3980dc22eb02b067ac4cb9ffa2f9b04d2.

    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

    • [DH] libavcodec/aarch64/vp9itxfm_neon.S