Other articles (72)

  • MediaSPIP Core: Configuration

    9 November 2010

    MediaSPIP Core provides three different configuration pages by default (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the skeleton; a page for the configuration of the site's home page; a page for the configuration of the sections.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their specific display options and features (...)

  • Frequent problems

    10 March 2010

    PHP with safe_mode enabled
    One of the main sources of problems comes from the PHP configuration, in particular having safe_mode enabled.
    The solution would be either to disable safe_mode or to place the script in a directory that Apache can access for the site.
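
    A minimal sketch of that check and fix, assuming a Debian-style setup (the php.ini path and service name below are assumptions, not taken from the article):

    # See whether the PHP used by the site has safe_mode enabled
    php -i | grep -i safe_mode

    # If so, set "safe_mode = Off" in php.ini (e.g. /etc/php5/apache2/php.ini,
    # path varies by distribution) and reload Apache
    sudo service apache2 reload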

  • Sites built with MediaSPIP

    2 May 2011

    This page showcases some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

On other sites (7233)

  • FFmpeg HLS segmenter stops segmenting after 5300 segments

    9 April 2019, by Hug Duino

    I'm trying to set up HTTP video streaming using HLS, ffmpeg and raspivid, and I need a replay window of one day, but after about 5300 segments ffmpeg stops segmenting and keeps writing the video into segment 5301 for the rest of the day (5300/5301 is an average figure, give or take 50 segments).
    I have plenty of storage space and my camera can record all day. The only problem is ffmpeg, which decides to stop segmenting after roughly 5300 segments.

    Thank you, and sorry for my poor English ^^

    Here is my streaming script:

    base="/var/www/html/"

    set -x

    rm -rf /var/www/html/ppc/saves/live live.h264
    mkdir -p /var/www/html/ppc/saves/live

    # fifos seem to work more reliably than pipes - and the fact that the
    # fifo can be named helps ffmpeg guess the format correctly.
    mkfifo live.h264
    raspivid -a 1036 -w 1640 -h 1232 -fps 15 -t 37200000 -b 1500000 -o - | psips > live.h264 &

    # Letting the buffer fill a little seems to help ffmpeg to id the stream
    sleep 2

    # Need ffmpeg around 1.0.5 or later. The stock Debian ffmpeg won't work.
    # I'm not aware of options apart from building it from source. I have
    # Raspbian packages built from Debian Multimedia sources. Available on
    # request but I don't want to post them publicly because I haven't cross
    # compiled all of Debian Multimedia and conflicts can occur.
    ffmpeg -y -r 15 -i live.h264 -f alsa  -i default:CARD=C525 -r:a 48000 -ac 1 -af adelay=32s -c:v copy -c:a aac -b:a 128k -map 0:0 -map 1:0 -r 30 \
    -f segment \
    -segment_time 7 \
    -segment_format mpegts \
    -segment_list /var/www/html/ppc/saves/live/live.m3u8 \
    -segment_list_flags live \
    -segment_list_type m3u8 \
    -initial_offset -9 \
    -strict 2 /var/www/html/ppc/saves/live/%08d.ts < /dev/null
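
    For scale, a quick back-of-the-envelope check of the numbers in this script (assuming the 7-second segment_time above) shows that 5300 segments covers almost exactly the raspivid -t capture window, so the cut-off lands near the end of the configured recording; whether the two are actually related is only a guess:

    # 5300 segments at ~7 s each vs. the raspivid -t duration (milliseconds)
    echo $((5300 * 7))          # 37100 seconds of segmented video
    echo $((37200000 / 1000))   # 37200 seconds requested from raspivid
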
  • Mix voiceover and background music with delay and volume

    6 August 2019, by oldman

    I'm no ffmpeg expert, but I really need to mix a voiceover with a background song.

    The voiceover must start 5 seconds after the song, and must end 5 seconds before the song ends.

    The music volume must be loud for the first 5 seconds and the last 5 seconds, but during the voiceover it should remain low.

    Searching the internet, the closest I could get to my goal was this:

    ffmpeg -i music.mp3 -i voiceover.mp3 -filter_complex "[0]asplit[a][b]; [a]atrim=duration=5,volume='1-max(0.25*(t-13),0)':eval=frame[pre]; [b]atrim=start=5,asetpts=PTS-STARTPTS[song]; [song][1]amix=inputs=2:duration=shortest:dropout_transition=0[post]; [pre][post]concat=n=2:v=0:a=1[mixed]" -map "[mixed]" output.mp3

    Could someone help me complete this script?
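
    One possible direction, sketched here under stated assumptions rather than as a finished answer: delay the voiceover by 5 seconds with adelay and duck the music with a time-based volume expression. The 0.3 ducking level is an arbitrary placeholder, and the "loud again for the last 5 seconds" part of the request is left out for brevity:

    # Minimal sketch: voiceover starts 5 s in; music is ducked to 0.3 after 5 s;
    # output length follows the music track. End-of-song ramp-up not handled.
    ffmpeg -i music.mp3 -i voiceover.mp3 -filter_complex \
      "[1]adelay=5000|5000[vo]; \
       [0]volume='if(lt(t,5),1,0.3)':eval=frame[bg]; \
       [bg][vo]amix=inputs=2:duration=first:dropout_transition=0[mixed]" \
      -map "[mixed]" output.mp3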

  • (ffmpeg) How to sync dshow inputs, dropping frames, and -rtbufsize [closed]

    29 July 2021, by Zach Fleeman

    I wrote a quick batch script to capture anything from my Elgato HD60 Pro capture card, and while it works in some ways, I don't really understand how certain parameters are affecting my capture.

    Whenever I run this command without the -rtbufsize 2048M -thread_queue_size 5096 params, I drop a ton of frames. I only added those params with those values because I found them on another Stack Overflow thread. I wouldn't mind actually knowing what these do and how I can fine-tune them for my script.

    ffmpeg.exe -y -rtbufsize 2048M -thread_queue_size 5096 -fflags +igndts ^
-f dshow -i video="Game Capture HD60 Pro":audio="Game Capture HD60 Pro Audio" ^
-filter:v "crop=1410:1080:255:0, scale=706x540" ^
-c:v libx264 -preset veryfast -b:v 1500k -pix_fmt yuv420p ^
-c:a aac ^
-f tee -map 0:v -map 0:a "%mydate%_%mytime%_capture.mp4|[f=flv]rtmp://xxx.xxx.xxx.xxx/live"

    In Open Broadcaster Software, my Elgato is a near-instant video feed, but this captures/streams things at a 3-ish second delay, which is okay until I work on this second command. I'm using gdigrab to capture the window from LiveSplit for my speedrunning, but I can't get the video streams to be synced up. I tried adding and modifying another -rtbufsize before the gdigrab input, but again, I'm not sure whether that is what I need to do to delay the LiveSplit grab. It always seems to be 2 to 3 seconds ahead of my capture card. How can I get these inputs to be synced and react at the same time? i.e., I start the timer in LiveSplit at the same moment that I hit a button on my Super Nintendo. (One possible timestamp-shifting approach is sketched after the command below.)

    ffmpeg.exe -y -rtbufsize 750M -thread_queue_size 5096 ^
-f dshow  -i video="Game Capture HD60 Pro":audio="Game Capture HD60 Pro Audio" ^
-rtbufsize 2000M -thread_queue_size 5096 ^
-f gdigrab -r 60 -i title=LiveSplit ^
-filter_complex "[0:v][0:v]overlay=255:0 [game];[game][1:v]overlay=0:40 [v]" ^
-c:v libx264 -preset veryfast -b:v 1500k -pix_fmt yuv420p ^
-c:a aac ^
-f tee -map "[v]" -map 0:a "%mydate%_%mytime%_capture.mp4|[f=flv]rtmp://192.168.1.7/live"
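
    One possible direction for the sync question, sketched under assumptions rather than as a known fix: shift the timestamps of the gdigrab input with setpts before overlaying, so the LiveSplit frames are presented later relative to the capture card. The 2.5-second value is a placeholder to tune by eye, and the command is written with bash-style continuations and a plain file output to keep the sketch short:

    # Same pipeline as above, but the LiveSplit (gdigrab) video is shifted
    # ~2.5 s later via setpts before the overlay; 2.5 is a guess to be tuned.
    ffmpeg -y -rtbufsize 750M -thread_queue_size 5096 \
      -f dshow -i video="Game Capture HD60 Pro":audio="Game Capture HD60 Pro Audio" \
      -rtbufsize 2000M -thread_queue_size 5096 \
      -f gdigrab -r 60 -i title=LiveSplit \
      -filter_complex "[1:v]setpts=PTS+2.5/TB[ls];[0:v][0:v]overlay=255:0[game];[game][ls]overlay=0:40[v]" \
      -c:v libx264 -preset veryfast -b:v 1500k -pix_fmt yuv420p \
      -c:a aac \
      -map "[v]" -map 0:a capture.mp4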

    tl;dr
Where should I put -rtbufsize? What value should it be? And how about -thread_queue_size? Are these things that I have to specify once or multiple times for each input? How can I get my different input sources to sync up?
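
    On the placement question, a sketch of my understanding (not confirmed by the post above): both -rtbufsize and -thread_queue_size are input options, so each occurrence applies to the -i that follows it and can be stated once per input with different values. The sizes below are placeholders, not recommendations:

    # Per-input placement: each option only affects the next -i.
    ffmpeg -y \
      -rtbufsize 2048M -thread_queue_size 1024 \
      -f dshow -i video="Game Capture HD60 Pro":audio="Game Capture HD60 Pro Audio" \
      -rtbufsize 512M -thread_queue_size 1024 \
      -f gdigrab -r 60 -i title=LiveSplit \
      -c:v libx264 -preset veryfast -c:a aac -t 10 out.mp4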

    p.s., I'm cropping and overlaying my Elgato inputs because my capture card does 1920x1080, but my video is most likely a 4:3-ish SNES/NES game.