Advanced search

Media (0)

Keyword: - Tags -/interaction

No media matching your criteria is available on the site.

Other articles (75)

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
    Distribution name | Version name | Version number
    Debian | Squeeze | 6.x.x
    Debian | Wheezy | 7.x.x
    Debian | Jessie | 8.x.x
    Ubuntu | The Precise Pangolin | 12.04 LTS
    Ubuntu | The Trusty Tahr | 14.04
    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the fixes needed to add it (...)

  • The farm's recurring Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared-hosting farm on a regular basis. Combined with a system Cron on the farm's central site, this makes it easy to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)
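
    As an illustration (a hypothetical example, not taken from the MediaSPIP documentation), the system Cron on the central site could be as simple as a crontab entry that requests the site every minute so that SPIP's task scheduler is triggered regularly; the URL below is a placeholder:

    # Hypothetical crontab entry on the farm's central site (URL is a placeholder).
    * * * * * curl -s http://central-farm-site.example/ > /dev/null 2>&1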

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (9753)

  • python multiple threads redirecting stdout

    4 September 2022, by Marcin Kamienski

    I'm building an icecast2 radio station which will restream existing stations in lower quality. This program will spawn multiple FFmpeg processes restreaming 24/7. For troubleshooting purposes, I would like to have the output of every FFmpeg process redirected to a separate file.

    


import ffmpeg, csv
from threading import Thread

# ICECAST2_* and the encoding constants (BITRATE, SAMPLE_RATE, FORMAT, CODEC)
# are configuration values defined elsewhere in the script.

def run(name, mount, source):
    # Build an ffmpeg-python stream that restreams `source` to the Icecast mount.
    icecast = "icecast://" + ICECAST2_USER + ":" + ICECAST2_PASS + "@localhost:" + ICECAST2_PORT + "/" + mount
    stream = (
        ffmpeg
        .input(source)
        .output(
            icecast,
            audio_bitrate=BITRATE, sample_rate=SAMPLE_RATE, format=FORMAT, acodec=CODEC,
            reconnect="1", reconnect_streamed="1", reconnect_at_eof="1", reconnect_delay_max="120",
            ice_name=name, ice_genre=source
        )
    )
    return stream

# Start one restreaming thread per station listed in stations.csv.
with open('stations.csv', mode='r') as data:
    for station in csv.DictReader(data):
        stream = run(station['name'], station['mount'], station['url'])
        thread = Thread(target=stream.run)
        thread.start()


    


    As I understand it, I can't redirect the stdout of each thread separately, and I also can't use FFmpeg's report feature, since it is configured only through an environment variable. Do I have any other options?
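
    One direction that might work (a sketch under assumptions, not a verified solution): since FFmpeg writes its log to stderr, the redirection can happen per child process rather than per thread, for example by compiling the ffmpeg-python arguments with ffmpeg.compile() and launching each process through subprocess with stderr pointed at a station-specific file. The "logs/<name>.log" naming below is only an illustrative assumption.

import csv
import subprocess
import ffmpeg

def start_process(name, mount, source):
    # Reuse run() from above to build the ffmpeg-python stream, then turn it
    # into a plain argument list instead of calling stream.run() in a thread.
    stream = run(name, mount, source)
    args = ffmpeg.compile(stream)

    # Each FFmpeg child gets its own log file; FFmpeg writes its log to stderr.
    log_file = open(f"logs/{name}.log", "w")
    return subprocess.Popen(args, stderr=log_file)

with open('stations.csv', mode='r') as data:
    processes = [
        start_process(station['name'], station['mount'], station['url'])
        for station in csv.DictReader(data)
    ]

    With Popen the threads become unnecessary, since each FFmpeg child already runs concurrently as its own process.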

    


  • How to repackage mov/mp4 file into HLS playlist with multiple audio streams

    9 September 2022, by Martyna

    I'm trying to convert some videos (in different formats, e.g., mp4, mov), each containing one video stream and multiple audio streams, into a single HLS playlist with multiple audio streams (treated as languages) and only one video stream.

    


    I have already browsed a lot of Stack Overflow threads and tried many different approaches, but I was only able to find answers about creating separate HLS playlists, each with a different audio track.

    


    A sample scenario I have to handle:

    


      

     1. I have one mov file, containing one video stream and 2 audio streams.
     2. I need to create an HLS playlist from this mov file, which will use this one video stream but encode these 2 audio streams as language tracks (so let's say ENG and FRA).
     3. Such a prepared HLS playlist can later be streamed in a player, and the end user can switch between audio tracks while watching the clip.


    


    What I was able to achieve is creating multiple HLS playlists, each with a different audio track.

    


    ffmpeg -i "file_name.mp4" \
-map 0:v -map 0:a -c:v copy -c:a copy -start_number 0 \
-f hls \
-hls_time 10 \
-hls_playlist_type vod \
-hls_list_size 0 \
-master_pl_name master_playlist_name.m3u8 \
-var_stream_map "v:0,agroup:groupname a:0,agroup:groupname,language:ENG a:1,agroup:groupname" file_name_%v_.m3u8


    


    My biggest issue is that I'm having a hard time understanding how the -map and -var_stream_map options should be used in my case, or whether they should even be used in this scenario.

    


    Here is an example of the output of running ffmpeg -i on the original mov file that should be converted into HLS:

    


  Stream #0:0[0x1](eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1920x1080, 8786 kb/s, 25 fps, 25 tbr, 12800 tbn (default)
    Metadata:
      handler_name    : Apple Video Media Handler
      vendor_id       : [0][0][0][0]
      timecode        : 00:00:56:05
  Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 127 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]
  Stream #0:2[0x3](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 127 kb/s
    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]


    


    I also checked this blog post, and I would like to achieve exactly this effect, except that in my case the shared stream should be the video, not the audio.

    


    


    For example, -var_stream_map "v:0,a:0 v:1,a:0 v:2,a:0" implies that
the audio stream denoted by a:0 is used in all three video renditions.
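
    For reference, a hedged sketch of how that grouping might be inverted for this case (one video rendition plus two audio renditions sharing an agroup). The stream indexes, group name, language tags and output names are assumptions rather than something verified against this particular file:

    ffmpeg -i "file_name.mov" \
-map 0:v:0 -map 0:a:0 -map 0:a:1 \
-c:v copy -c:a copy -start_number 0 \
-f hls \
-hls_time 10 \
-hls_playlist_type vod \
-hls_list_size 0 \
-master_pl_name master_playlist_name.m3u8 \
-var_stream_map "v:0,agroup:audio a:0,agroup:audio,language:ENG,default:yes a:1,agroup:audio,language:FRA" \
file_name_%v_.m3u8

    The idea, as far as I understand it, is that -map only selects which input streams end up in the output, while -var_stream_map then splits those output streams into variant playlists and rendition groups, so the video variant references the agroup and each audio stream becomes its own rendition with a LANGUAGE attribute.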

    


    


  • How do I encode a video stream to multiple output formats in parallel with ffmpeg ?

    21 July 2022, by rgov

    I would like to use one FFmpeg process to receive video input and then pass that video to multiple separate encoder processes in order to efficiently make use of all available CPU cores.

    


    The FFmpeg wiki article on Creating multiple outputs has this note from @rogerdpack:

    


    


    Outputting and re-encoding multiple times in the same FFmpeg process will typically slow down to the "slowest encoder" in your list. Some encoders (like libx264) perform their encoding "threaded and in the background", so they will effectively allow for parallel encodings; however, audio encoding may be serial and become the bottleneck, etc. It seems that if you do have any encodings that are serial, they will be treated as "real serial" by FFmpeg and thus your FFmpeg may not use all available cores. One workaround is to use multiple ffmpeg instances running in parallel, or possibly piping from one ffmpeg to another to "do the second encoding", etc. Or, if you can avoid the limiting encoder (e.g. by using a different, faster one [e.g. raw format] or just doing a raw stream copy), that might help.

    


    


    The article has an example of using the tee pseudo-muxer, but it uses a single instance of FFmpeg. The example of piping from one instance of FFmpeg to another only allows one encoder process.

    


    A 10-year-old version of the same article mentioned using the tee process, but that passage was subsequently deleted:

    


    


    Another option is to output from FFmpeg to "-", then pipe that to a "tee" command, which can send it to multiple other processes, for instance 2 different ffmpeg processes for encoding (this may save time: if you do different encodings and run them in 2 different simultaneous processes, it might do the encoding more in parallel than otherwise). Unbenchmarked, however.

    


    


    Along the same lines: some of the example commands use the mpegts format to encapsulate frames before passing them between processes. Does this impose any constraints on the codecs or the types of metadata that can be sent to the downstream processes?
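
    To make the pipe-and-tee idea concrete, here is a hedged sketch (untested, bash-specific) of what such a split could look like. The input name, output names and codec choices are placeholders, and NUT is used over the pipes only because, unlike MPEG-TS, it can carry nearly any codec:

# One ffmpeg reads and remuxes the source once to NUT on stdout; a shell "tee"
# with process substitution feeds two independent encoder processes in parallel.
ffmpeg -i input.mov -map 0 -c copy -f nut - | tee \
  >(ffmpeg -y -f nut -i - -c:v libx264 -preset veryfast -c:a aac out_h264.mp4) \
  >(ffmpeg -y -f nut -i - -c:v libvpx-vp9 -b:v 2M -c:a libopus out_vp9.webm) \
  > /dev/null

    As for the mpegts question: my understanding is that MPEG-TS only accepts a limited set of codecs and drops most container-level metadata, which is why a more permissive intermediate container such as NUT or Matroska is often suggested for piping between processes.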