Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (99)

  • Improving the base version

    13 September 2013

    A nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. Compare the two images below.
    All you need to do is activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Contribute to a better visual interface

    13 avril 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (8239)

  • FFmpeg : how to stream multiple http outputs

    25 October 2022, by Caner Demir

    I've been asked to find a way to stream more than one HTTP output from ffmpeg. I tried the command below, which uses a pipe, but only the last output works; the others don't. The solutions I tried with the tee muxer didn't work either. Any idea how I can do that?

    ffmpeg -re -i sample1.mp3 -i sample2.mp3 -filter_complex amix=inputs=2 -f mp3 pipe:1 | ffmpeg -f mp3 -re -i pipe:0 -c copy -listen 1 -f mp3 http://127.0.0.1:4000 -c copy -listen 1 -f mp3 http://127.0.0.1:4010

    The command I tried with the tee muxer is below:

    ffmpeg -re -i sample1.mp3 -i sample2.mp3 -filter_complex amix=inputs=2 -c:a mp3 -f tee "[listen=1:f=mp3]http://127.0.0.1:4000|[listen=1:f=mp3]http://127.0.0.1:4010"
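
    For reference, a single ffmpeg process can also feed several outputs without a pipe by splitting the mixed audio and mapping one copy to each destination. A minimal sketch, assuming each URL points at a server that is ready to accept the pushed stream (the paths are illustrative, not taken from the question):

    ffmpeg -re -i sample1.mp3 -i sample2.mp3 -filter_complex "amix=inputs=2,asplit=2[a][b]" -map "[a]" -c:a libmp3lame -f mp3 http://127.0.0.1:4000/feed -map "[b]" -c:a libmp3lame -f mp3 http://127.0.0.1:4010/feed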


    


    Edit: here is the log, along with the command that produced it:

    ffmpeg -loglevel debug -hide_banner -re -i sample1.mp3 -filter_complex asplit=2 -c:a mp3 -listen 1 -f mp3 http://127.0.0.1:4000 -c:a mp3 -listen 1 -f mp3 http://127.0.0.1:4010
Splitting the commandline.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Reading option '-hide_banner' ... matched as option 'hide_banner' (do not show program banner) with argument '1'.
Reading option '-re' ... matched as option 're' (read input at native frame rate; equivalent to -readrate 1) with argument '1'.
Reading option '-i' ... matched as input url with argument 'sample1.mp3'.
Reading option '-filter_complex' ... matched as option 'filter_complex' (create a complex filtergraph) with argument 'asplit=2'.
Reading option '-c:a' ... matched as option 'c' (codec name) with argument 'mp3'.
Reading option '-listen' ... matched as AVOption 'listen' with argument '1'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'mp3'.
Reading option 'http://127.0.0.1:4000' ... matched as output url.
Reading option '-c:a' ... matched as option 'c' (codec name) with argument 'mp3'.
Reading option '-listen' ... matched as AVOption 'listen' with argument '1'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'mp3'.
Reading option 'http://127.0.0.1:4010' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option loglevel (set logging level) with argument debug.
Applying option hide_banner (do not show program banner) with argument 1.
Applying option filter_complex (create a complex filtergraph) with argument asplit=2.
Successfully parsed a group of options.
Parsing a group of options: input url sample1.mp3.
Applying option re (read input at native frame rate; equivalent to -readrate 1) with argument 1.
Successfully parsed a group of options.
Opening an input file: sample1.mp3.
[NULL @ 00000216ab345b80] Opening 'sample1.mp3' for reading
[file @ 00000216ab346180] Setting default whitelist 'file,crypto,data'
[mp3 @ 00000216ab345b80] Format mp3 probed with size=4096 and score=51
id3v2 ver:4 flags:00 len:134
[mp3 @ 00000216ab345b80] pad 576 576
[mp3 @ 00000216ab345b80] Skipping 0 bytes of junk at 561.
[mp3 @ 00000216ab345b80] Before avformat_find_stream_info() pos: 561 bytes read:32768 seeks:0 nb_streams:1
[mp3 @ 00000216ab345b80] demuxer injecting skip 1105 / discard 0
[mp3float @ 00000216ab34f3c0] skip 1105 / discard 0 samples due to side data
[mp3float @ 00000216ab34f3c0] skip 1105/1152 samples
[mp3 @ 00000216ab345b80] All info found
[mp3 @ 00000216ab345b80] After avformat_find_stream_info() pos: 22065 bytes read:32768 seeks:0 frames:50
Input #0, mp3, from 'sample1.mp3':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2mp41
    encoder         : Lavf58.45.100
  Duration: 00:12:58.48, start: 0.025057, bitrate: 128 kb/s
  Stream #0:0, 50, 1/14112000: Audio: mp3, 44100 Hz, stereo, fltp, 128 kb/s
    Metadata:
      encoder         : Lavc58.91
Successfully opened the file.
[Parsed_asplit_0 @ 00000216ab34fc80] Setting 'outputs' to value '2'
Parsing a group of options: output url http://127.0.0.1:4000.
Applying option c:a (codec name) with argument mp3.
Applying option f (force format) with argument mp3.
Successfully parsed a group of options.
Opening an output file: http://127.0.0.1:4000.
Matched encoder 'libmp3lame' for codec 'mp3'.
    Last message repeated 1 times
[http @ 00000216ab44d980] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy,data'

  • FFMPEG hangs when trying to receive rtp/udp stream

    22 December 2016, by NathanK

    I am trying to receive a UDP stream encoded as h264. The exact command I am using is:

    ffmpeg -v 9 -loglevel 99 -re -i "udp://239.192.1.2:1234" outfile.h264

    The output I receive is:

    ffmpeg version 3.2.2-1 Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 6.2.1 (Debian 6.2.1-5) 20161124


    configuration: --prefix=/usr --extra-version=1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libebur128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --enable-libopencv --enable-frei0r --enable-libx264 --enable-chromaprint --enable-shared
     libavutil      55. 34.100 / 55. 34.100
     libavcodec     57. 64.101 / 57. 64.101
     libavformat    57. 56.100 / 57. 56.100
     libavdevice    57.  1.100 / 57.  1.100
     libavfilter     6. 65.100 /  6. 65.100
     libavresample   3.  1.  0 /  3.  1.  0
     libswscale      4.  2.100 /  4.  2.100
     libswresample   2.  3.100 /  2.  3.100
     libpostproc    54.  1.100 / 54.  1.100
    Splitting the commandline.
    Reading option '-v' ... matched as option 'v' (set logging level) with argument '9'.
    Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument '99'.
    Reading option '-re' ... matched as option 're' (read input at native frame rate) with argument '1'.
    Reading option '-i' ... matched as input url with argument 'udp://@239.192.1.2:1234'.
    Reading option 'outffff.h264' ... matched as output url.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option v (set logging level) with argument 9.
    Successfully parsed a group of options.
    Parsing a group of options: input url udp://@239.192.1.2:1234.
    Applying option re (read input at native frame rate) with argument 1.
    Successfully parsed a group of options.
    Opening an input file: udp://@239.192.1.2:1234.
    [udp @ 0x7f6638a36280] No default whitelist set
    [udp @ 0x7f6638a36280] end receive buffer size reported is 131072
    [AVIOContext @ 0x7f6638a56840] Statistics: 0 bytes read, 0 seeks

    The problem with this is, obviously, that the ffmpeg process isn't detecting any incoming packets on that address, which causes the process to hang.
    I am, however, able to receive the stream using vlc, so I know that the packets are arriving. Here is a link to a similar problem.

    I thought, perhaps, it was a problem with reverse path filtering (as mentioned in the link), but rp_filter is not set in /etc/sysctl.conf. I'm not quite sure what to do from here; any help is appreciated.

    Edit: I removed the @; however, this changes nothing.

    The vlc command that works is:

    vlc --miface eth1 rtp://@239.192.1.2:1234
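
    Since VLC here only works after being pointed at a specific interface with --miface, a comparable thing to try with ffmpeg is to join the multicast group via a chosen local interface, using the udp protocol's localaddr option. A minimal sketch, assuming the receiving interface has the illustrative address 192.168.1.10:

    ffmpeg -v 9 -loglevel 99 -re -i "udp://239.192.1.2:1234?localaddr=192.168.1.10" outfile.h264
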
  • FFMPEG mosaic/side-by-side-compositing from simultaneous DirectShow input devices

    9 June 2013, by timlukins

    This is what I'm trying to do:

    ffmpeg.exe -y \
    -f dshow -i video="Microsoft LifeCam Cinema" \
    -f dshow -i video="Microsoft LifeCam VX-2000" \
    -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" \
    -map "[fileout]" -vcodec libx264 -f flv out.flv

    Basically, I have 2 webcams and I would like to combine them into a single video file in which the frames are 2x1 in size, with the frame from one camera on the left and the other on the right.

    In other words, what might be termed "mosaic-ing" or "side-by-side compositing". This is not concatenation - i.e. one file after the other (so not using the concat filter).

    I've gleaned that this use of -filter_complex to pad and then position the frames appears to be the prescribed way (an alternative filtergraph using hstack is sketched at the end of this post). Indeed, when I test this with files like so:

    ffmpeg.exe -y -i test1.flv -i test2.flv -filter_complex "[0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout]" -map "[fileout]" -vcodec libx264 -f flv testout.flv

    It works fine!

    With the "live" version however, both cameras seem to start (their lights come on) but the capture stalls.

    (Suspiciously like there is some DirectShow deadlock on the separate input device threads...)

    And so, I wonder: is there some way to overcome this and force the two input streams' data to merge?

    I have also tried the extended format of the dshow input option, like so:

    -f dshow -i video="Microsoft LifeCam Cinema":video="Microsoft LifeCam VX-2000"

    But only one camera is then selected (I suspect this option is really only to enable separate video and audio streams to be combined).

    I've also tried explicitly setting each input device to have the exact same frame size and rate with -f dshow -video_size 640x480 -framerate 30. No joy though. It still stalls once the camera is listed.

    Here is the tail end of the output (with -v debug on):

    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option y (overwrite output files) with argument 1.
    Applying option v (set libav* logging level) with argument debug.
    Applying option filter_complex (create a complex filtergraph) with argument [0:v]pad=iw*2:ih:0[left];[left][1:v]overlay=W/2.0[fileout].
    Successfully parsed a group of options.
    Parsing a group of options: input file video=Microsoft LifeCam Cinema.
    Applying option f (force format) with argument dshow.
    Successfully parsed a group of options.
    Opening an input file: video=Microsoft LifeCam Cinema.
    [dshow @ 00000000016e79a0] All info found
    [dshow @ 00000000016e79a0] Estimating duration from bitrate, this may be inaccurate
    Input #0, dshow, from 'video=Microsoft LifeCam Cinema':
     Duration: N/A, start: 1130406.072000, bitrate: N/A
       Stream #0:0, 1, 1/10000000: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 333333/10000000, 30 tbr, 10000k tbn, 30 tbc
    Successfully opened the file.
    Parsing a group of options: input file video=Microsoft LifeCam VX-2000.
    Applying option f (force format) with argument dshow.
    Successfully parsed a group of options.
    Opening an input file: video=Microsoft LifeCam VX-2000.
    [dshow @ 00000000016e79a0] real-time buffer 101% full! frame dropped!

    EDIT: Further details on trying to fix this within the code...

    I've always understood from past Windows DirectShow work that multiple calls to CoInitialize() on the same thread are bad. See here. Perhaps I've misunderstood how FFmpeg is multi-threaded (i.e. whether each input device is on its own thread), but I thought I would just try regulating the call with a guard variable (a static int com_init = 0; this should probably be mutex-ed...).

    e.g. in libavdevice/dshow.c, in dshow_read_header:

    if (com_init == 0)      /* initialise COM only once per process */
        CoInitialize(0);
    com_init++;

    And similarly in dshow_read_close:

    com_init--;
    if (com_init == 0)      /* last user releases COM */
        CoUninitialize();

    Sadly, this doesn't work. The first camera starts, but the second doesn't, and the error is:

    [dshow @ 0000000000301760] Could not set video options
    video=Microsoft LifeCam VX-2000: Input/output error

    (Worth a shot. Looks like each input device is indeed on the same thread...)
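
    For completeness: the side-by-side layout itself can also be expressed with the hstack filter instead of pad/overlay. A minimal sketch, assuming an ffmpeg build recent enough to include hstack and two inputs of equal height (this only restates the filtergraph; it does not address the DirectShow capture stall):

    ffmpeg.exe -y \
    -f dshow -video_size 640x480 -framerate 30 -i video="Microsoft LifeCam Cinema" \
    -f dshow -video_size 640x480 -framerate 30 -i video="Microsoft LifeCam VX-2000" \
    -filter_complex "[0:v][1:v]hstack=inputs=2[fileout]" \
    -map "[fileout]" -vcodec libx264 -f flv out.flv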