
Syncing 3 RTSP video streams in ffmpeg
26 September 2017, by Damon Maria

I'm using an AXIS Q3708 camera. It internally has 3 sensors to create a 180º view. Each sensor puts out its own RTSP stream. To put together the full 180º view I need to pull an image from each stream and place the images side by side. Obviously it's important that the 3 streams be synchronized so the 3 images are taken at the same 'wall clock' time. For this reason I want to use ffmpeg, because it should be a champ at this.
I intended to use the hstack filter to combine the 3 images. However, it’s causing me a lot of grief and errors.
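To make concrete what "synchronized" means here, a toy sketch in plain Python (all names are mine; this is not ffmpeg's actual logic): given each stream's frame timestamps, pick the frame from each stream closest to a common target time.

```python
# Toy sketch, not ffmpeg internals: nearest-timestamp matching across streams.
# All names are hypothetical.

def align_frames(stream_timestamps, target):
    """For each stream, return the index of the frame whose timestamp
    is closest to `target` (a wall-clock time in seconds)."""
    picks = []
    for timestamps in stream_timestamps:
        best = min(range(len(timestamps)),
                   key=lambda i: abs(timestamps[i] - target))
        picks.append(best)
    return picks

# Three 4 FPS streams whose clocks are slightly offset from one another:
streams = [
    [0.00, 0.25, 0.50, 0.75],
    [0.02, 0.27, 0.52, 0.77],
    [0.24, 0.49, 0.74, 0.99],
]
print(align_frames(streams, 0.50))  # → [2, 2, 1]
```

The third stream's best match is a different frame index than the other two, which is exactly the kind of offset a filter stacking the streams has to cope with.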
What I've tried:
1. Hstack the RTSP streams:
ffmpeg -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=1" -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=2" -i "rtsp://root:root@192.168.13.104/axis-media/media.amp?camera=3" -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[v]" -map "[v]" out.mp4
I get lots of RTSP dropped-packet and decoding errors, which is strange given this is an i7-7700K 4.2GHz with an NVIDIA GTX 1080 Ti and 32GB of RAM, and the camera is on a local gigabit network:
[rtsp @ 0xd4eca42a00] max delay reached. need to consume packetA dup=0 drop=136 speed=1.16x
[rtsp @ 0xd4eca42a00] RTP: missed 5 packets
[rtsp @ 0xd4eca42a00] max delay reached. need to consume packetA dup=0 drop=137 speed=1.15x
[rtsp @ 0xd4eca42a00] RTP: missed 4 packets
[h264 @ 0xd4ecbb3520] error while decoding MB 14 15, bytestream -21
[h264 @ 0xd4ecbb3520] concealing 1185 DC, 1185 AC, 1185 MV errors in P frame

2. Using
ffmpeg -i %s -c:v copy -map 0:0 out.mp4
to save each stream to a file, and then running the above hstack command with the 3 files rather than the 3 RTSP streams. First off, there are no dropped packets saving the files, and the hstack runs at speed=25x, so I don't know why the operation in 1 had so many errors. But in the resulting video, some parts 'pause' between frames as though the same image was used across 2 frames for some of the hstack inputs, but not the others. Also, the 'scene' at a given distance into the video lags behind the input videos, which would happen if frames are being duplicated.

3. If I use the RTSP streams as the input, and for the output specify
-f null -
(the null
muxer), then the muxer reports a lot of these errors:
[null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 1 >= 1
Last message repeated 1 times
[null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 2 >= 2
[null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 3 >= 3
[null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 4 >= 4
Last message repeated 1 times
[null @ 0x76afb1a060] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 5 >= 5
Last message repeated 1 times

Which again sounds like frames are being duplicated.
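As an aside on what the muxer is complaining about: each stream's dts must be strictly increasing, and a repeated dts (the "1 >= 1" above, i.e. a duplicated frame) trips that check. A small illustration in plain Python (my own helper, not ffmpeg source):

```python
# Hedged illustration, not ffmpeg source: a muxer requires strictly
# increasing dts per stream; a duplicated frame repeats its dts.

def find_non_monotonic(dts_values):
    """Return the indices where dts fails to strictly increase."""
    return [i for i in range(1, len(dts_values))
            if dts_values[i] <= dts_values[i - 1]]

# Duplicated frames show up as repeated dts values:
print(find_non_monotonic([0, 1, 1, 2, 3, 3, 4]))  # → [2, 5]
```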
4. If I add
-vsync cfr
then the null muxer no longer reports non monotonically increasing dts
, and the dropped-packet / decoding errors are reduced (but still there). Does this show that the timing info from the RTSP streams is 'tripping up' ffmpeg? I presume it's not a solution though, because it essentially wipes out and replaces the timing information ffmpeg would need to use to sync.

5. Saving a single RTSP stream (re-encoding, not using the
copy
codec or the null
muxer) logs a lot of warnings like:
Past duration 0.999992 too large
Last message repeated 7 times
Past duration 0.999947 too large
Past duration 0.999992 too large

6. I first tried performing this in code (using PyAV), but struck problems where pushing a frame from each container into the 3 hstack inputs would cause hstack to output multiple frames (when it should output only one). Again, this points to hstack duplicating frames.
7. I have used Wireshark to sniff the RTCP/RTP traffic, and the RTCP Sender Reports have correct NTP timestamps in them, matched to the timestamps in the RTP streams.
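For reference, mapping those Sender Report NTP timestamps to wall-clock (Unix) time is just a fixed epoch shift, since NTP counts seconds from 1900 and Unix from 1970. A small sketch (the offset constant is standard; the function name is mine):

```python
# NTP timestamps (RFC 3550 Sender Reports) are 32-bit seconds since
# 1900-01-01 plus a 32-bit fraction. Unix time starts at 1970-01-01,
# so conversion is an offset of 2208988800 seconds.

NTP_UNIX_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def ntp_to_unix(ntp_seconds, ntp_fraction):
    """Convert an NTP timestamp (32-bit seconds + 32-bit fraction)
    to Unix time in seconds as a float."""
    return (ntp_seconds - NTP_UNIX_OFFSET) + ntp_fraction / 2**32

# A fraction field of 0x80000000 is exactly half a second:
print(ntp_to_unix(3714528000, 0x80000000))  # → 1505539200.5 (Sept 2017)
```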
8. Using ffprobe to show the frames of an RTSP stream (example below), I would have expected to see real (NTP-based) timestamps, given they exist in the RTCP packets. I'm not sure what the correct behaviour for ffprobe is. But it does show that most frame timestamps are not exactly 0.25s apart (the camera is running at 4 FPS), which might explain
-vsync cfr
'fixing' some issues and the
Past duration 0.999992
style errors:

pkt_pts=1012502
pkt_pts_time=0:00:11.250022
pkt_dts=1012502
pkt_dts_time=0:00:11.250022
best_effort_timestamp=1012502
best_effort_timestamp_time=0:00:11.250022
pkt_duration=N/A
pkt_duration_time=N/A
pkt_pos=N/A

I posted this as a possible hstack bug on the ffmpeg bug tracker, but that discussion fizzled out.
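One quick way to quantify the timestamp jitter visible in ffprobe output like the above: compute the interval between consecutive pkt_pts_time values and flag any that deviate from the expected 0.25 s. A sketch only (the function and tolerance are my own choices):

```python
# Sketch: flag inter-frame intervals that deviate from the expected
# 0.25 s (4 FPS). Function name and tolerance are hypothetical choices.

def flag_jitter(pts_times, expected=0.25, tolerance=0.005):
    """Return (interval, deviation) for each consecutive pair whose
    spacing deviates from `expected` by more than `tolerance`."""
    flagged = []
    for a, b in zip(pts_times, pts_times[1:]):
        interval = b - a
        if abs(interval - expected) > tolerance:
            flagged.append((round(interval, 6), round(interval - expected, 6)))
    return flagged

# One interval is 10 ms long; the others are exactly on the 4 FPS grid:
print(flag_jitter([11.0, 11.25, 11.51, 11.76]))  # → [(0.26, 0.01)]
```

Feeding the pkt_pts_time values from a longer ffprobe run through something like this would show how far off the 4 FPS grid the camera's timestamps actually are.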
So, the question: how do I sync 3 RTSP video streams through hstack in ffmpeg?