Tag: Rennes

Other articles (26)

  • MediaSPIP initialisation (preconfiguration)

    20 February 2010

    When MediaSPIP is installed, it comes preconfigured for the most common uses.
    This preconfiguration is performed by a plugin that is enabled by default and cannot be disabled, called MediaSPIP Init.
    This plugin takes care of correctly preconfiguring every MediaSPIP instance. It therefore has to be placed in the plugins-dist/ folder of the site or of the farm, so that it is installed by default before the site can be used (a minimal placement sketch follows this excerpt).
    As a first step, it enables or disables SPIP options that do not (...)
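    As a rough illustration of that placement step only: the folder name mediaspip_init and the site path below are assumptions made for the example, not taken from the article.

      # Hypothetical layout: copy the MediaSPIP Init plugin into plugins-dist/
      # so that SPIP installs and activates it by default
      # ("mediaspip_init" is an assumed folder name, adjust to your checkout).
      cp -r mediaspip_init /var/www/mediaspip/plugins-dist/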

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable MediaSPIP release.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

On other sites (6057)

  • Capture camera + mic and encode to h264/aac on macOS

    16 December 2018, by Flock Dawson

    I’m having trouble capturing and encoding audio+video on-the-fly on macOS.

    I tried two options:

    1. ffmpeg

      ffmpeg -threads 0 -f avfoundation -s 1920x1080 -framerate 25 -i "0:0" -async 441 -c:v libx264 -preset medium -pix_fmt yuv420p -crf 22 -c:a libfdk_aac -aq 95 -y out.mp4
    2. gstreamer

      gst-launch-1.0 -ve avfvideosrc device-index=0 ! video/x-raw,width=1920,height=1080,framerate=25/1 ! vtenc_h264 ! queue ! mp4mux name=mux ! filesink location=out.mp4  osxaudiosrc device=0 ! audio/x-raw ! faac midside=false ! queue ! mux.

    The ffmpeg option works, but only for lower resolutions. With higher resolutions, the Mac mini (2018 gen) can't do the heavy lifting. I suspect this is because I installed ffmpeg with brew, so it wasn't compiled on my machine; does that mean it doesn't use the Mac's H264 hardware encoder? (A hardware-encoder variant is sketched after this question.)

    The gstreamer option works as well, but there's a slight audio/video sync issue (the audio is 100 ms ahead of the video). I can't seem to add a delay to the GStreamer queue (it ignores it):

    queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=100000000

    Does anyone have any experience with this? Thanks!
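    A hedged note on the hardware-encoder question above: Homebrew builds of ffmpeg normally include Apple's VideoToolbox encoder as a separate codec (check with ffmpeg -encoders | grep videotoolbox); it is simply not selected when -c:v libx264 is requested. The sketch below shows the idea; the bitrate values are assumptions for illustration, not tuned settings.

      # Offload H264 encoding to the Mac's hardware encoder via VideoToolbox
      # (bitrates chosen arbitrarily for illustration).
      ffmpeg -f avfoundation -framerate 25 -video_size 1920x1080 -i "0:0" \
        -c:v h264_videotoolbox -b:v 6000k -pix_fmt yuv420p \
        -c:a aac -b:a 160k out.mp4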

  • avcodec/speexdec: Consider mode in frame size check

    26 December 2021, by Michael Niedermayer
    avcodec/speexdec: Consider mode in frame size check
    

    No speex samples with non-default frame sizes are known (to me);
    the official speexenc seems to only generate the 3 default ones.
    Thus it may be that the fuzzer samples were the first non-default
    values encountered by the decoder.
    Possibly the "<" should be "!="

    Fixes: out of array access
    Fixes: 42821/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_SPEEX_fuzzer-5640695772217344

    Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
    Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>

    • [DH] libavcodec/speexdec.c

  • Three ways to obtain an H264 file

    18 March 2016, by Kindermann

    Here I have three ways to get an H264 file and, like all forensic scientists, I am very curious about the differences between them:

    1.

    ffmpeg -i video.mp4 video.h264

    2.

    ffmpeg -i video.mp4 -vcodec copy -an -f h264 video.h264

    3. Using the example "demuxing_decoding.c" provided on the ffmpeg official website:
    http://ffmpeg.org/doxygen/trunk/demuxing_decoding_8c-example.html

    Obviously, the first one does the transcoding, and the second one does the demuxing. They produce different H264 files which nevertheless have similar file sizes (in my case, about 24 MB). Surprisingly, the third one, which is also supposed to do the demuxing job, produces an H264 file of 8.4 GB! Why?

    What I really wonder is how the internals of these three methods work. (The third one is already source code, so it is fairly easy to get an insight.) What about the first two commands? Which APIs are called when executing them, and in what order are they called?
    One thing that is also important to me: I have no idea how to trace the execution path of an ffmpeg command line. I want to see what is going on behind ffmpeg commands at the source-code level. Is that possible? (One way is sketched after this question.)

    I appreciate any comment.
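    A hedged pointer for the tracing question above: ffmpeg's own log levels already expose much of what a command line does internally, including which demuxer, decoder and bitstream filters get picked, so that is a cheap first step before stepping through the tool's source (fftools/ffmpeg.c in current trees) under a debugger. A sketch, reusing the file names from the question:

      # List the bitstream filters this build knows about (h264_mp4toannexb is
      # the one a stream copy to raw .h264 relies on).
      ffmpeg -bsfs

      # Re-run the stream-copy command with verbose internal logging; the log
      # typically shows the automatically inserted h264_mp4toannexb filter and
      # each demuxer/muxer decision.
      ffmpeg -loglevel debug -i video.mp4 -vcodec copy -an -f h264 video.h264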