
Other articles (98)
-
Customisable form
21 June 2013. This page presents the fields available in the form for publishing a media item and lists the different fields that can be added. Media creation form
For a media-type document, the fields offered by default are: Text; Enable/disable the forum (the invitation to comment can be disabled for each article); Licence; Add/remove authors; Tags
This form can be modified under:
Administration > Configuration des masques de formulaire. (...) -
What is a form mask?
13 June 2013. A form mask is the customisation of the publication form for media, sections, news items, editorials and links to other sites.
Each object's publication form can therefore be customised.
To customise the form fields, go to the administration area of your MediaSPIP site and select "Configuration des masques de formulaires".
Then select the form to modify by clicking on its object type. (...) -
Sites built with MediaSPIP
2 May 2011. This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
On other sites (6142)
-
FFmpeg audio/video livestream with HTML5 dash.js player: time is not in sync
14 April 2017, by Dániel Kis
I am creating a live video and audio stream with ffmpeg using the libvpx-vp9 codec, and I want to view it in an HTML5 browser with dash.js. But when dash.js opens the video it requests chunks that do not exist yet. It seems that the chunk counter on the client side is running at double speed.
Here is how I create the stream:
VP9_LIVE_PARAMS="-speed 6 -tile-columns 4 -frame-parallel 1 -threads 8 -static-thresh 0 -max-intra-rate 300 -deadline realtime -lag-in-frames 0 -error-resilient 1"
TARGET_PATH="/var/www/html/live/stream"
ffmpeg \
-thread_queue_size 8192 \
-f v4l2 -input_format mjpeg -r 30 -s 640x360 -i /dev/video0 \
-thread_queue_size 16384 \
-f alsa -ar 44100 -ac 2 -i hw:1,0 \
-map 0:0 \
-pix_fmt yuv422p \
-c:v libvpx-vp9 \
-s 640x360 -keyint_min 60 -g 60 ${VP9_LIVE_PARAMS} \
-b:v 300k \
-f webm_chunk \
-header "${TARGET_PATH}/glass_360.hdr" \
-chunk_start_index 1 \
"${TARGET_PATH}/glass_360_%d.chk" \
-map 1:0 \
-c:a libvorbis \
-b:a 64k -ar 44100 \
-f webm_chunk \
-audio_chunk_duration 2000 \
-header "${TARGET_PATH}/glass_171.hdr" \
-chunk_start_index 1 \
"${TARGET_PATH}/glass_171_%d.chk" \some seconds later I call the ffmpeg again to create manifest file :
ffmpeg \
-analyzeduration 1000 \
-f webm_dash_manifest -live 1 \
-i "${TARGET_PATH}/glass_360.hdr" \
-f webm_dash_manifest -live 1 \
-i "${TARGET_PATH}/glass_171.hdr" \
-c copy \
-map 0 -map 1 \
-f webm_dash_manifest -live 1 \
-adaptation_sets "id=0,streams=0 id=1,streams=1" \
-chunk_start_index 1 \
-chunk_duration_ms 2000 \
-time_shift_buffer_depth 7200 \
-minimum_update_period 7200 \
"${TARGET_PATH}/glass_live_manifest.mpd"My server’s clock is synced to "http://time.akamai.com/?iso"
HTML5 player:
<script src="https://cdn.dashjs.org/latest/dash.all.min.js"></script>
Live broadcast
<script>
var url = "glass_live_manifest.mpd";
var player = dashjs.MediaPlayer().create();
player.initialize(document.querySelector("#videoPlayer"), url, true);
</script>
-
FFmpeg and video4linux2 parameters - how to capture still images faster?
13 August 2021, by mcgregor94086
Problem Summary


I have built an 18-camera array of USB webcams, attached to a Raspberry Pi 400 as the controller. My Python 3.8 code for capturing an image from each webcam is slow, and I am trying to find ways to speed it up.


The FFmpeg and video4linux2 command line options are confusing to me, so I am not sure whether the delays are due to my poor choice of parameters and whether a better set of options would solve the problem.


The Goal


I am trying to capture one image from each camera as quickly as possible.


I am using FFMPEG and video4linux2 command line options to capture each image within a loop of all the cameras as shown below.


Expected results


I just want a single frame from each camera. The frame rate is 30 fps, so I was expecting that capture time would be on the order of 1/30th to 1/10th of a second worst case. But the performance timer is telling me that each capture is taking 2-3 seconds.


Additionally, I don't really understand the ffmpeg output, but this output worries me:


frame= 0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 1 fps=0.5 q=8.3 Lsize=N/A time=00:00:00.06 bitrate=N/A speed=0.0318x 
video:149kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing 



I don't understand why the "frame=" line is repeated 4 times. And in the 4th repetition, the fps says 0.5, which I would interpret as one frame every 2 seconds, not the 30 fps that I specified.


Specific Questions:


Can anyone explain to me what this ffmpeg output means, and why it is taking 2 seconds per image captured, and not closer to 1/30th of a second?


Can anyone explain to me how to capture the images in less time per capture?


Should I be spawning a separate thread for each ffmpeg call, so they run asynchronously, instead of serially? Or would that not really save time in practice?


Actual results


Input #0, video4linux2,v4l2, from '/dev/video0':
 Duration: N/A, start: 6004.168748, bitrate: N/A
 Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1920x1080, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
Stream mapping:
 Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
Press [q] to stop, [?] for help
Output #0, image2, to '/tmp/video1.jpg':
 Metadata:
 encoder : Lavf58.20.100
 Stream #0:0: Video: mjpeg, yuvj422p(pc), 1920x1080, q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc
 Metadata:
 encoder : Lavc58.35.100 mjpeg
 Side data:
 cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
frame= 0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 1 fps=0.5 q=8.3 Lsize=N/A time=00:00:00.06 bitrate=N/A speed=0.0318x 
video:149kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown

Captured /dev/video0 image in: 3 seconds
Input #0, video4linux2,v4l2, from '/dev/video2':
 Duration: N/A, start: 6007.240871, bitrate: N/A
 Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 1920x1080, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
Stream mapping:
 Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
Press [q] to stop, [?] for help
Output #0, image2, to '/tmp/video2.jpg':
 Metadata:
 encoder : Lavf58.20.100
 Stream #0:0: Video: mjpeg, yuvj422p(pc), 1920x1080, q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc
 Metadata:
 encoder : Lavc58.35.100 mjpeg
 Side data:
 cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
frame= 0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 0 fps=0.0 q=0.0 size=N/A time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 1 fps=0.5 q=8.3 Lsize=N/A time=00:00:00.06 bitrate=N/A speed=0.0318x 
video:133kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown

Captured /dev/video2 image in: 3 seconds
...



The code:


import os
import subprocess
import time

list_of_camera_ids = ["/dev/video1", "/dev/video2", "/dev/video3", "/dev/video4",
                      "/dev/video5", "/dev/video6", "/dev/video7", "/dev/video8",
                      "/dev/video9", "/dev/video10", "/dev/video11", "/dev/video12",
                      "/dev/video13", "/dev/video14", "/dev/video15", "/dev/video16",
                      "/dev/video17", "/dev/video18"
                      ]

for this_camera_id in list_of_camera_ids:
    full_image_file_name = '/tmp/' + os.path.basename(this_camera_id) + '.jpg'
    image_capture_tic = time.perf_counter()

    # Capture a single MJPEG frame from this camera and save it as a JPEG file
    run_cmd = subprocess.run([
        '/usr/bin/ffmpeg', '-y', '-hide_banner',
        '-f', 'video4linux2',
        '-input_format', 'mjpeg',
        '-framerate', '30',
        '-i', this_camera_id,
        '-frames', '1',
        '-f', 'image2',
        full_image_file_name
        ],
        universal_newlines=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
        )
    print(run_cmd.stderr)
    image_capture_toc = time.perf_counter()
    print(f"Captured {this_camera_id} image in: {image_capture_toc - image_capture_tic:0.0f} seconds")



ADDITIONAL DATA :
In response to an answer by Mark Setchell saying that more information is needed to answer this question, I now provide the requested information here:


Cameras: the cameras are USB 3 webcams that identify themselves as:


idVendor 0x0bda Realtek Semiconductor Corp.
idProduct 0x5829 



I tried to add the lengthy lsusb dump for one of the cameras, but then this post exceeds the 30,000-character limit.


How the cameras are attached: the Pi's USB 3 port connects to a master 7-port USB 3 hub, with 3 spur 7-port hubs (not all ports on the spur hubs are occupied).


Camera resolution: HD format, 1920x1080


Why am I setting a frame rate if I only want 1 image?


Setting a frame rate seems odd, given that it specifies the time between frames when I only want a single frame. I did it because I don't know how to get a single image from FFmpeg any other way. This was the one example of FFmpeg command options discussed on the web that I could get to capture a single image successfully. I've tried innumerable sets of options that don't work! I wrote this post because my web searches did not yield an example that works for me, and I am hoping that someone much better informed than I am will show me a way that works.
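For illustration only, and not as a verified fix on this hardware: a minimal sketch of a single-image grab that drops the -framerate option (since -frames:v 1 already limits the output to one frame) and lowers the -probesize / -analyzeduration input options, on the assumption that most of the per-call delay is ffmpeg probing the device before the first frame. The helper name capture_one_frame is hypothetical, and the probing values are guesses to tune or remove if the capture fails.

import subprocess

def capture_one_frame(camera_id: str, out_path: str) -> None:
    """Grab a single MJPEG frame from a v4l2 device and save it as a JPEG.

    The lowered -probesize / -analyzeduration values are an untested assumption
    meant to shorten ffmpeg's input probing; remove them if the capture fails.
    """
    subprocess.run([
        '/usr/bin/ffmpeg', '-y', '-hide_banner',
        '-f', 'video4linux2',
        '-input_format', 'mjpeg',
        '-probesize', '32',          # probe as little input data as ffmpeg allows
        '-analyzeduration', '0',     # do not spend extra time analysing the stream
        '-i', camera_id,
        '-frames:v', '1',            # stop after the first decoded frame
        out_path,
    ], check=True, capture_output=True, text=True)

# Hypothetical usage:
# capture_one_frame('/dev/video0', '/tmp/video0.jpg')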


Why am I scanning the cameras sequentially rather than in parallel?


I did this just to keep things simple at first, and a loop over the list seemed easy and Pythonic. It was clear to me that I might later be able to spawn a separate thread for each FFmpeg call and maybe get a parallel speed-up that way. Indeed, I would welcome an example of how to do that; a possible sketch follows.
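Purely as a sketch of that threaded approach, not a tested recommendation for this rig: because each capture is a separate external ffmpeg process, Python threads are sufficient to run them concurrently, since each thread mostly just waits on its own subprocess. The helper name capture_camera and the worker count are assumptions for illustration.

import os
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

def capture_camera(camera_id: str) -> str:
    """Run one single-frame ffmpeg capture and return a timing message."""
    out_path = '/tmp/' + os.path.basename(camera_id) + '.jpg'
    tic = time.perf_counter()
    subprocess.run([
        '/usr/bin/ffmpeg', '-y', '-hide_banner',
        '-f', 'video4linux2',
        '-input_format', 'mjpeg',
        '-i', camera_id,
        '-frames:v', '1',
        out_path,
    ], capture_output=True, text=True)
    toc = time.perf_counter()
    return f"Captured {camera_id} image in: {toc - tic:0.1f} seconds"

list_of_camera_ids = [f"/dev/video{n}" for n in range(1, 19)]

# Start every capture at once; each worker thread just waits on its own ffmpeg
# process, so the Python GIL is not a limiting factor here.
with ThreadPoolExecutor(max_workers=len(list_of_camera_ids)) as pool:
    for message in pool.map(capture_camera, list_of_camera_ids):
        print(message)

Whether this actually helps depends on how much bandwidth the shared USB 3 hubs can supply; eighteen simultaneous MJPEG streams may simply contend for the bus rather than overlap cleanly.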


But in any case, a single image capture taking 3 seconds seems far too long.


Why am I only using one of the 4 cores on my Raspberry Pi?


The sample code I posted is just a snippet from my entire program. Image capturing currently takes place in a child thread, while a windowed GUI with an event loop runs in the main thread, so that user input isn't blocked during imaging.


I am not knowledgeable enough about the cores of the Raspberry Pi 400, nor about how the Raspberry Pi OS (aka Raspbian) allocates threads to cores, nor about whether Python can or should explicitly direct threads to run on specific cores.

I would welcome suggestions from Mark Setchell (or anyone else knowledgeable about these issues) recommending a best practice, with example code.