
Other articles (106)

  • Encoding and processing into web-friendly formats

13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and in MP3 (supported by Flash).
    Where possible, text is analyzed to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
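
    For illustration, conversions of the kind described are typically performed with ffmpeg commands along these lines (a sketch only; these are not MediaSPIP's actual commands, and the file names are hypothetical):

    ffmpeg -i upload.avi -c:v libtheora -c:a libvorbis output.ogv
    ffmpeg -i upload.avi -c:v libvpx -c:a libvorbis output.webm
    ffmpeg -i upload.avi -c:v libx264 -c:a aac output.mp4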

  • Customizable form

    21 June 2013, by

    This page presents the fields available in the media publication form and lists the fields that can be added. Media creation form
    For a media-type document, the default fields are: Text; Enable/disable the forum (the comment prompt can be turned off for each article); Licence; Add/remove authors; Tags.
    This form can be modified under:
    Administration > Configuration des masques de formulaire. (...)

  • What is a form mask?

    13 June 2013, by

    A form mask is a customization of the form used to publish media, sections, news items, editorials, and links to other sites.
    Each object's publication form can therefore be customized.
    To customize form fields, go to your MediaSPIP administration area and select "Configuration des masques de formulaires".
    Then select the form to modify by clicking on its object type. (...)

On other sites (6358)

  • FFMPEG video editing application. Need time and date stamp burned into video

11 May 2022, by Jacob

    I am developing an application for video editing. Its main job is to produce a single video file from several video files captured by a camcorder, with the time and date stamp displayed on the final rendered video, much like the output of a security camera. Using FFmpeg, I have figured out how to burn the date and time into the video with an .SRT file as well as with drawtext, like the following:

    ffmpeg -y -i video.mp4 -vf "drawtext=fontfile=roboto.ttf:fontsize=12:fontcolor=yellow:text='%{pts\:localtime\:1575526882\:%A, %d, %B %Y %I\\\:%M\\\:%S %p}'" -preset ultrafast -f mp4 output_new.mp4

    I would rather use the drawtext method so the user does not have to wait while .SRT files are created. I am new to FFmpeg and find its documentation confusing, so I am hoping someone out there has experience with it.

    Everything seems to work when I pass in the date-created metadata from the video file; drawtext just does its thing. The problem is that my application allows the video to be edited: for lack of a better solution, the user selects the unwanted beginning and ending frames in the UI, and the code simply deletes those frames from the directory where they were split and saved. I then use FFmpeg to iterate through the directory and combine the remaining frames into a video file.
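
    For context, recombining a directory of frames into a video is typically done along these lines (the frame pattern and rate are hypothetical, not taken from the question):

    ffmpeg -framerate 30 -i frames/frame_%05d.jpg -c:v libx264 -pix_fmt yuv420p combined.mp4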

    This approach starts the time and date from the date-created metadata; however, cutting frames out of the video makes the date/time stamp inaccurate, because of the missing frames.

    Is there any way to tell FFmpeg to burn in the date and time retrieved from each individual frame? I appreciate any advice you may have.
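
    One possible direction, sketched under the assumption that each edited segment is saved as its own file (file names and the epoch value below are hypothetical): read each segment's creation time with ffprobe, convert it to a Unix epoch, burn a per-segment timestamp with drawtext, and only then concatenate, so each piece starts its clock at its own recording time:

    # Read a segment's creation time from its metadata:
    ffprobe -v quiet -show_entries format_tags=creation_time -of default=nw=1:nk=1 segment_01.mp4
    # Burn that time in (1651838400 is a made-up epoch for illustration):
    ffmpeg -y -i segment_01.mp4 -vf "drawtext=fontfile=roboto.ttf:fontsize=12:fontcolor=yellow:text='%{pts\:localtime\:1651838400\:%A, %d, %B %Y %I\\\:%M\\\:%S %p}'" -preset ultrafast stamped_01.mp4
    # Then concatenate the stamped segments:
    ffmpeg -f concat -i stamped_list.txt -c copy output_with_timestamps.mp4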

  • How to make WebM screen recording chunks independently processable for audio with FFmpeg ?

3 December 2024, by Dinesh Kumar

    I am streaming screen recordings from the browser into 5-second WebM chunks using the MediaRecorder API. The first chunk (the root chunk) is independently processable because it contains the necessary EBML headers and metadata. Subsequent chunks, however, lack that metadata, which prevents me from extracting audio from them on their own.

    I am unable to extract audio from the individual chunks with FFmpeg because of the missing headers, and I get errors like "EBML header parsing failed". The first chunk works fine on its own, but the subsequent chunks also need to be processed independently for audio extraction.

    I am looking for a way to fix these chunks with FFmpeg so that I can extract audio from each chunk independently.

    1. Is there a way to repair these chunks post-recording with FFmpeg, adding the missing metadata and headers so that each becomes independently processable for audio extraction?
    2. Can FFmpeg reinitialize the EBML headers in each chunk, or is there a command that can add the metadata from the first chunk to subsequent chunks to allow independent audio extraction?

    Additionally, should I change anything in my use of the MediaRecorder API so that the chunks are properly formatted for independent processing? The goal is to make each WebM chunk fully self-contained, so that audio can be extracted from each one independently.
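
    One commonly suggested workaround, sketched here under the assumption that all chunks come from one continuous MediaRecorder session (the file names below are hypothetical): every blob after the first is a continuation of the same WebM stream, so prepending the first chunk, which carries the EBML header and track metadata, to a later chunk yields a parseable file that can be remuxed for audio extraction:

    # Prepend the header-bearing first chunk to a later chunk:
    cat chunk_000.webm chunk_005.webm > joined.webm
    # Remux and keep only the audio track (stream copy, no re-encode):
    ffmpeg -i joined.webm -map 0:a -c copy joined_audio.webm

    Note that the later chunk's clusters keep their original timestamps, so the extracted audio may not start at zero. Alternatively, stopping and restarting the MediaRecorder for each segment, instead of relying on the timeslice parameter, makes every saved file a complete, self-contained WebM.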

  • FFMPEG concat leaves audio gaps between clips

14 November 2022, by GotCubes

    I'm writing a Python script that uses subprocess to invoke FFmpeg, rather than using pyffmpeg.

    My script generates a variable number of MP4 files using the AAC audio codec and concatenates them together using FFmpeg. Here is how I'm constructing each clip:

    ffmpeg -loop 1 -i image.jpg -i recording.mp3 -tune stillimage -c:a aac -b:a 256k -shortest clip.mp4

    The command I'm using to concatenate them is:

    ffmpeg -f concat -i clip_names.txt -c copy video_raw.mp4
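
    For reference, the concat demuxer expects clip_names.txt to use the following format (clip names are hypothetical):

    file 'clip1.mp4'
    file 'clip2.mp4'
    file 'clip3.mp4'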

    I then take the resulting video, mix a looping audio track over it, and adjust the volume:

    ffmpeg -i video_raw.mp4 \
      -filter_complex "amovie=Tracks/Breaktime.mp3:loop=0,volume=0.1,asetpts=N/SR/TB[aud];[0:a][aud]amix[a]" \
      -map 0:v -map "[a]" -b:a 256k -shortest final_video.mp4

    These commands seem to work as I intend: when I play the resulting MP4 from my local machine, everything plays without issue.

    However, when I uploaded the video to YouTube, I ran into problems. Played from YouTube, the video has about a second of silence at every timestamp where two clips were concatenated, before the next clip begins. I have tried Chrome, IE, and Firefox, all with the same issue.

    Based on what I have looked into so far, I think it could be an issue with how the priming samples of each individual clip are handled. I am not obligated to keep using MP4 or AAC, so if a different audio/video codec would work better, feel free to suggest it!

    Is there some kind of manipulation I can do in FFmpeg to get rid of the priming samples, or to process them differently? In the end, I want each clip to play back to back without the delay that the concat operation seems to insert. Thank you!
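
    One approach worth trying, sketched rather than tested: every AAC file begins with encoder-delay (priming) samples, and -c copy preserves each clip's padding at every splice point, which players can render as a brief gap. Re-encoding only the audio during the concat produces one continuous AAC stream with a single set of priming samples at the very start:

    ffmpeg -f concat -i clip_names.txt -c:v copy -c:a aac -b:a 256k video_raw.mp4

    If that does not help, the concat filter (as opposed to the concat demuxer) decodes and re-encodes both streams in one pass, which also avoids per-clip padding.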