
Media (91)

Other articles (58)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, the full set of software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    For a working installation, the full set of software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

On other sites (8190)

  • ffmpeg in:h264 out:yuv to stdout - data format ?

    27 February 2019, by Petr

    I am (like many) trying to get a continuous series of still images out of the camera attached to a Raspberry Pi. I want to do this in Java for all the usual reasons, so I am using a Runtime exec command to pipe the output of raspivid into the following ffmpeg command and collecting the result via stdout. Note that xxx.h264 is a test file generated by the camera; it does not play because there is no container, but I am getting images out, so it half works.

    ffmpeg -i xxx.h264 -vcodec rawvideo -r 2 -pix_fmt yuv420p -f nut -

    I have some code displaying the frames, but they "march" across the display area from left to right, and there appears to be a growing amount of rubbish across the top of the images. I have looked at the bytes the command outputs by running it again and redirecting into a file, then inspecting it with vi/xxd, and I find that there is header material ("nut/multimedia container ...").

    I am guessing that there is more metadata inserted by my ffmpeg command that I am failing to remove when processing the raw yuv420p data as described here: https://en.wikipedia.org/wiki/YUV#Y%E2%80%B2UV420sp_%28NV21%29_to_RGB_conversion_%28Android%29

    For the life of me I cannot find the NUT documentation anywhere in a readable format, and anyway it seems that is not what I should be looking for. Any pointers as to how I can recognise the frame boundaries in my byte stream?
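    A common workaround (my own sketch, not from the post): ask ffmpeg for bare `rawvideo` output instead of the NUT container, e.g. `ffmpeg -i xxx.h264 -r 2 -pix_fmt yuv420p -f rawvideo -`, so stdout carries nothing but pixel data. Frames are then fixed-size: for yuv420p, exactly width × height × 3/2 bytes. The chunking helper below is pure Python; the tiny 4×2 "frames" are stand-in data, not camera output.

```python
import io

def iter_yuv420p_frames(stream, width, height):
    """Yield complete yuv420p frames from a headerless raw byte stream.

    With `-f rawvideo` there is no container metadata, so the stream can
    be split on fixed frame-size boundaries.
    """
    frame_size = width * height * 3 // 2  # Y plane + quarter-size U and V planes
    while True:
        buf = stream.read(frame_size)
        if len(buf) < frame_size:  # EOF (or a truncated final frame)
            return
        yield buf

# Stand-in for ffmpeg's stdout: two tiny 4x2 yuv420p frames (12 bytes each).
fake_stdout = io.BytesIO(bytes(range(12)) * 2)
frames = list(iter_yuv420p_frames(fake_stdout, 4, 2))
```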

  • hevc_nvenc HQ settings collection

    28 December 2017, by Luca Malavasi

    I would like to collect the reasonably best HQ settings for FFmpeg's "hevc_nvenc" codec.
    I have seen that on 10xx-series cards processing a Blu-ray should take 15 to 30 minutes,
    so it would be no big deal to try the best possible settings (i.e. quality vs. disk size).

    Here is my test-sample experience, bearing in mind that unfortunately this codec doesn't support the -crf option:

    -1-SD-h264=
    ffmpeg -i input -gpu 0 -vcodec hevc_nvenc -aspect 16:9 -strict -2 -b:v 2000k -minrate 2000k -maxrate 2000k -tier high -profile:2 -preset:llhq -2pass 1 -acodec copy output

    -2-HD-h264=
    ffmpeg -i input -gpu 0 -vcodec hevc_nvenc -b:v 9000k -minrate 9000k -maxrate 9000k -tier high -profile:2 -preset:llhq -2pass 1 -acodec copy output

    -3-HD-h265=
    ffmpeg -i input -vcodec hevc_nvenc -b:v 5000k -minrate 5000k -maxrate 5000k -tier high -profile:2 -preset:llhq -2pass 1 -acodec copy output

    Can anybody suggest further improved settings for options 1, 2 and 3?
    Thanks to all contributors.
    Cheers
    Luca
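    Since hevc_nvenc here is driven by a fixed bitrate rather than -crf, the disk-size half of the trade-off reduces to arithmetic: output size ≈ bitrate × duration / 8. A rough comparison helper (my own sketch, not from the question; the 2-hour duration is an assumption, and audio is ignored since it is stream-copied):

```python
def estimated_size_mib(bitrate_kbps, duration_s):
    """Rough output size of a constant-bitrate video encode.

    kbit/s -> bit/s, times duration, /8 for bytes, then to MiB.
    """
    return bitrate_kbps * 1000 * duration_s / 8 / (1024 * 1024)

two_hours = 2 * 3600  # hypothetical Blu-ray feature length, in seconds
sizes = {label: estimated_size_mib(kbps, two_hours)
         for label, kbps in [("1-SD", 2000), ("2-HD-h264", 9000), ("3-HD-h265", 5000)]}
```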

  • ffmpeg setpts apply uniform offset

    19 November 2018, by Vinay

    I have a series of videos that I'm converting from .mov to .ts before creating an HLS playlist from them. I'm able to figure out the ending pts of both the audio and video streams of any given video, and am trying to apply that ending (cumulative) offset when converting later videos in the sequence. For instance:

    1. convert 0.mov -> 0.ts
    2. Get ending pts of audio stream and video stream for 0.ts.
    3. Apply the ending video/audio pts as a setpts filter offset when converting 1.mov -> 1.ts
    4. Repeat
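    The loop in steps 1-4 can be sketched as follows (my own sketch; it assumes the ending pts pair of each converted segment has already been read with ffprobe, rather than shelling out to ffmpeg here):

```python
def build_setpts_filters(ending_pts_pairs):
    """Given (audio_end_pts, video_end_pts) for each segment in order,
    return the cumulative asetpts/setpts filter strings to use when
    converting each segment: segment N inherits the summed ending pts
    of segments 0..N-1."""
    filters = []
    audio_off = video_off = 0
    for audio_end, video_end in ending_pts_pairs:
        filters.append((
            f"asetpts=PTS-STARTPTS+{audio_off}",
            f"setpts=PTS-STARTPTS+{video_off}",
        ))
        audio_off += audio_end
        video_off += video_end
    return filters

# First segment gets offset 0; the second inherits 0.ts's ending pts.
f = build_setpts_filters([(367534, 363000), (400000, 390000)])
```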

    As an example, I'm using the following command for the second video in the sequence:

    ffmpeg -y -i 1.mov \
       -filter:a "asetpts=PTS-STARTPTS+367534" \
       -filter:v "setpts=PTS-STARTPTS+363000" \
       -codec:v libx264 -crf 18 -preset veryfast \
       -acodec aac -muxdelay 0 1.ts

    The pts offsets 367534 (audio) and 363000 (video) were grabbed from the previously converted 0.ts; however, when I do this, 1.ts ends up having a duration of 58s and an offset of 8.31s.

    This is how I'm grabbing the ending pts offset of 0.ts (inspired by https://stackoverflow.com/a/53348545/696130):

    ffprobe -v 0 \
       -show_entries packet=pts,duration -of compact=p=0:nk=1 \
       -select_streams a \
       0.ts | sed '/^\s*$/d' | tail -1
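    The last line that pipeline prints is `pts|duration` for the final audio packet, and the ending offset needed for the next conversion is their sum (pts alone would stop one packet short). A small parser sketch (the sample packet values below are made up so that they land on the question's 367534):

```python
def ending_pts(ffprobe_lines):
    """Sum pts + duration of the last packet printed by
    `ffprobe -show_entries packet=pts,duration -of compact=p=0:nk=1`."""
    last = [line for line in ffprobe_lines if line.strip()][-1]
    pts, duration = (int(field) for field in last.split("|"))
    return pts + duration

# Made-up sample of the final audio-packet lines for 0.ts.
sample = ["365486|1024", "366510|1024", ""]
offset = ending_pts(sample)  # offset for the next segment's asetpts
```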