
Other articles (52)
-
Sites built with MediaSPIP
2 May 2011. This page presents some of the sites running MediaSPIP.
You can of course add yours using the form at the bottom of the page.
-
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
Support for all types of media
10 April 2011. Unlike many other modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are images (png, gif, jpg, bmp and others), audio (MP3, Ogg, Wav and others), video (Avi, MP4, Ogv, mpg, mov, wmv and others), or textual content, code and more (OpenOffice, Microsoft Office (spreadsheet, presentation), web (HTML, CSS), LaTeX, Google Earth) (...)
On other sites (3800)
-
Specify timestamp in ffmpeg video segment command
25 March 2019, by Soumya. I have a continuous RTSP stream coming from a camera over the network.
I want to dump the stream, but into video files one minute long each. I am using the following command:
ffmpeg -i "rtsp://user:pass@example.com" -f mp4 -r 12 -s 640x480 -ar 44100 \
-ac 1 -segment_time 60 -segment_format mp4 "out%03d.mp4"
The names of the files being created are of the form out001.mp4, out002.mp4, etc. I want to include the timestamp (hour and minute) in the names of the file segments, e.g. 09-30.mp4, 09-31.mp4, etc. If it is mandatory to provide a serial number for the segment, is it possible to get something like 09-30-001.mp4, 09-31-002.mp4?
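A minimal sketch of one way to do this, assuming the intent is to use ffmpeg's segment muxer explicitly (-f segment instead of -f mp4): the muxer's strftime option expands strftime patterns such as %H and %M in the output name. As far as I know, a strftime-based name cannot be combined with the %03d serial counter, so the counter is dropped here.
ffmpeg -i "rtsp://user:pass@example.com" -r 12 -s 640x480 -ar 44100 -ac 1 \
-f segment -segment_time 60 -segment_format mp4 -strftime 1 "%H-%M.mp4"
-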
rtsp to rtmp using ffmpeg or any tool wrapper
27 August 2017, by Chakri. I have a requirement where I need to restream an RTSP stream from a camera source to an RTMP server. I know this may sound like a repeated question, but my exact scenario is that I cannot do it manually on the command line with an ffmpeg command. I need a wrapper that receives the RTSP and RTMP URLs from an external source, say through a REST invocation; the code can then trigger the ffmpeg restream.
Basically the flow is like this:
- The camera source application sends an RTSP read event (for example a basic HTTP (REST) request with the RTSP URL and metadata about the camera: info, serial number, etc.) to my streamer app.
- The streamer app computes the RTMP server URL for the corresponding camera and triggers an ffmpeg command to stream RTSP to RTMP, for example:
/usr/bin/ffmpeg -i rtsp://10.144.11.22:554/stream1 -f flv rtmp://10.13.11.121:1935/stream1
- The streamer app triggers the above (2) in a separate thread and keeps reading the logs for monitoring purposes. It also identifies the end of the RTSP stream and sends an update event (for example "RTSP END") to the UI.
Now at point (2) I need a suggestion: a stable wrapper/API that can help here. I tried this through some Java wrappers, but the process hangs or fails to read the output from ffmpeg. I also need to handle streams from many cameras, where spawning a thread for each one could be exhausting.
So I am looking for a similar API/wrapper in C++ or Go that might allow closer interaction when handling the ffmpeg command.
Please point me to it if a similar issue is addressed elsewhere.
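Whatever language the wrapper ends up in, steps (2) and (3) come down to the same pattern: spawn ffmpeg, keep draining its output (not consuming stderr is a common reason wrappers appear to hang), and treat process exit as the end of the RTSP stream. A rough shell sketch of that pattern, reusing the URLs from the example above; the notification endpoint is hypothetical:
#!/bin/sh
RTSP_URL="rtsp://10.144.11.22:554/stream1"    # received via the REST call
RTMP_URL="rtmp://10.13.11.121:1935/stream1"   # computed by the streamer app
LOG="/tmp/stream1.log"
# Relay the stream and keep the full ffmpeg log on disk for monitoring.
/usr/bin/ffmpeg -i "$RTSP_URL" -f flv "$RTMP_URL" >"$LOG" 2>&1 &
FFMPEG_PID=$!
# ffmpeg exits when the RTSP source ends or the relay fails;
# at that point notify the UI (hypothetical endpoint).
wait "$FFMPEG_PID"
curl -s -X POST "http://ui.example/events" -d "event=RTSP_END&stream=stream1"
The same structure maps directly onto os/exec in Go or a popen-style call in C++, with one goroutine or thread per camera instead of one shell per camera.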
-
converting video with avconv while capturing
31 May 2017, by Oliver. I am successfully capturing a video stream with this device:
Easycap DC60. But I need to run the following commands in sequence:
First this:
sudo killall -9 somagic-both
sudo killall -9 somagic-capture
sudo somagic-init
sudo somagic-both -n 1>.video 2>.audio
[CTRL+C]
Then this:
avconv
-f rawvideo -pix_fmt uyvy422 -r 30 -s:v 720x480 -i .video
-f s16le -sample_rate 24000 -ac 2 -i .audio -strict experimental
-vcodec mpeg4 -vtag xvid -qscale:v 7
-vf yadif -s:v 720x540
video.avi
The problem is that when I try to run them together in one go (with "&" at the end of "sudo somagic-both -n 1>.video 2>.audio"), I get this error:
ffmpeg version 2.8.11-0ubuntu0.16.04.1 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Input #0, rawvideo, from '.video':
Duration: N/A, bitrate: 165888 kb/s
Stream #0:0: Video: rawvideo (UYVY / 0x59565955), uyvy422, 720x480, 165888 kb/s, 30 tbr, 30 tbn, 30 tbc
[s16le @ 0x24d7480] Estimating duration from bitrate, this may be inaccurate
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, s16le, from '.audio':
Duration: 00:00:00.71, bitrate: 767 kb/s
Stream #1:0: Audio: pcm_s16le, 24000 Hz, 2 channels, s16, 768 kb/s
File 'video.avi' already exists. Overwrite ? [y/N] y
Output #0, avi, to 'video.avi':
Metadata:
ISFT : Lavf56.40.101
Stream #0:0: Video: mpeg4 (xvid / 0x64697678), yuv420p, 720x540, q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc
Metadata:
encoder : Lavc56.60.100 mpeg4
Stream #0:1: Audio: mp3 (libmp3lame) (U[0][0][0] / 0x0055), 24000 Hz, stereo, s16p
Metadata:
encoder : Lavc56.60.100 libmp3lame
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> mpeg4 (native))
Stream #1:0 -> #0:1 (pcm_s16le (native) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
Multiple frames in a packet from stream 0
[pcm_s16le @ 0x24f33c0] Invalid PCM packet, data has size 2 but at least a size of 4 was expected
Error while decoding stream #1:0: Invalid data found when processing input
frame= 0 fps=0.0 q=0.0 Lsize= 38kB time=00:00:03.19 bitrate= 98.0kbits/s
video:0kB audio:25kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 50.802467%
I think it happens because it is converting more quickly than it is capturing. I tried the solution given here without success. Has anyone ever faced this situation before?
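One commonly suggested workaround, sketched below and untested with the somagic tools: feed avconv through named pipes instead of regular files, so it blocks until the capture process has actually produced data rather than reading half-written .video and .audio files.
sudo killall -9 somagic-both somagic-capture
sudo somagic-init
rm -f .video .audio && mkfifo .video .audio   # named pipes instead of plain files
sudo somagic-both -n 1>.video 2>.audio &
avconv \
-f rawvideo -pix_fmt uyvy422 -r 30 -s:v 720x480 -i .video \
-f s16le -sample_rate 24000 -ac 2 -i .audio -strict experimental \
-vcodec mpeg4 -vtag xvid -qscale:v 7 \
-vf yadif -s:v 720x540 \
video.avi
With FIFOs the capture and the conversion run concurrently and avconv simply waits for data, so there is no race between capturing and converting.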