
Other articles (106)
-
Encoding and processing into web-friendly formats
13 April 2011. MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded to MP4, OGV and WebM (supported by HTML5), with MP4 also used for Flash playback.
Audio files are encoded to MP3 and Ogg (supported by HTML5), with MP3 also used for Flash playback.
Where possible, text is analyzed to extract the data needed for search-engine indexing, and the document is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
Sites built with MediaSPIP
2 May 2011. This page presents a few of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
No talk of "markets", "cloud", etc.
10 April 2011. The vocabulary used on this site tries to avoid any reference to the buzzwords that flourish so freely
on Web 2.0 and in the businesses that live off it.
You are therefore invited to avoid using the terms "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages
the sharing of creations on the Internet and allows authors to keep as much autonomy as possible.
No "Gold or Premium contract" is therefore planned, no (...)
On other sites (7275)
-
keepalive type and frequency in ffmpeg [on hold]
19 November 2013, by Jack Simth. My company distributes a number of IP cameras, specifically Grandstream, and the manufacturer has changed their firmware. The normal keepalive that ffmpeg uses for RTSP streams (either ff_rtsp_send_cmd_async(s, "GET_PARAMETER", rt->control_uri, NULL); or ff_rtsp_send_cmd_async(s, "OPTIONS", "*", NULL);, both in libavformat/rtspdec.c) is no longer working, for two reasons:
1) The new Grandstream firmware now checks specifically for an RTCP receiver report to decide whether the program reading the stream is still alive; other traffic no longer counts.
2) The new Grandstream firmware requires that the receiver report keeping the connection alive arrive at least once every 25 seconds, and on the audio stream it is currently arriving only about every 30 seconds (the video stream gets one roughly every 7 seconds).
So after about a minute of ffmpeg being connected, the camera stops sending the audio stream, ffmpeg reads end-of-file on the audio stream, and then ffmpeg shuts everything down.
As I can't change the firmware, I'm trying to dig through the ffmpeg code to make it send the appropriate receiver report for the keepalive, but I'm getting nowhere. I've added a little snippet of code to the receiver-report path so I can see when it runs when I call ffmpeg with debug logging, but it's not going well.
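For reference, the packet the camera is presumably waiting for is an RTCP Receiver Report, whose wire format is fixed by RFC 3550: packet type 201, an 8-byte header followed by one 24-byte report block per source. The Node.js sketch below only illustrates that layout; it is not FFmpeg code, and the SSRC, sequence and timing values are made-up placeholders.

// Illustration only: builds a minimal RTCP Receiver Report (RFC 3550, PT = 201)
// with a single report block. All values passed in here are placeholders.
function buildReceiverReport(senderSsrc, sourceSsrc, extHighestSeq, jitter, lastSr, delaySinceLastSr) {
    var buf = Buffer.alloc(32);                    // 8-byte header + one 24-byte report block
    buf.writeUInt8(0x81, 0);                       // V=2, P=0, report count = 1
    buf.writeUInt8(201, 1);                        // packet type 201 = Receiver Report
    buf.writeUInt16BE(7, 2);                       // length in 32-bit words minus one (32/4 - 1)
    buf.writeUInt32BE(senderSsrc >>> 0, 4);        // SSRC of the receiver sending the report
    buf.writeUInt32BE(sourceSsrc >>> 0, 8);        // SSRC of the media source being reported on
    buf.writeUInt32BE(0, 12);                      // fraction lost (8 bits) + cumulative packets lost (24 bits)
    buf.writeUInt32BE(extHighestSeq >>> 0, 16);    // extended highest sequence number received
    buf.writeUInt32BE(jitter >>> 0, 20);           // interarrival jitter
    buf.writeUInt32BE(lastSr >>> 0, 24);           // LSR: middle 32 bits of the last Sender Report's NTP timestamp
    buf.writeUInt32BE(delaySinceLastSr >>> 0, 28); // DLSR: delay since that Sender Report, in 1/65536 s units
    return buf;
}
// The resulting 32-byte buffer is what gets sent on the RTCP channel
// (RTP port + 1 over UDP, or interleaved on the RTSP TCP connection).
var rr = buildReceiverReport(0x12345678, 0x9abcdef0, 1000, 0, 0, 0);
console.log(rr.toString('hex'));

In FFmpeg itself, the RTCP receiver-report logic lives in libavformat/rtpdec.c rather than rtspdec.c, so that is likely the more relevant place to look when adjusting how often reports go out.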
Test command:
ffmpeg -loglevel debug -i rtsp://admin:admin@192.168.4.3:554/0 -acodec libmp3lame -ar 22050 -vcodec copy -y -f flv /dev/null &> test.txt
Test output:
[root@localhost ffmpeg]# cat test.txt
ffmpeg version 2.0 Copyright (c) 2000-2013 the FFmpeg developers
built on Aug 21 2013 14:24:28 with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-3)
configuration: --datadir=/usr/share/ffmpeg --bindir=/usr/local/bin --libdir=/usr/local/lib --incdir=/usr/local/include --shlibdir=/usr/lib --mandir=/usr/share/man --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables' --enable-avfilter --enable-libx264 --enable-gpl --enable-version3 --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-x11grab --enable-librtmp --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-static --enable-libgsm --enable-libxvid --enable-libvpx --enable-libvorbis --enable-libvo-aacenc --enable-libmp3lame
libavutil 52. 38.100 / 52. 38.100
libavcodec 55. 18.102 / 55. 18.102
libavformat 55. 12.100 / 55. 12.100
libavdevice 55. 3.100 / 55. 3.100
libavfilter 3. 79.101 / 3. 79.101
libswscale 2. 3.100 / 2. 3.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 3.100 / 52. 3.100
Splitting the commandline.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Reading option '-i' ... matched as input file with argument 'rtsp://admin:admin@192.168.4.3:554/0'.
Reading option '-acodec' ... matched as option 'acodec' (force audio codec ('copy' to copy stream)) with argument 'libmp3lame'.
Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '22050'.
Reading option '-vcodec' ... matched as option 'vcodec' (force video codec ('copy' to copy stream)) with argument 'copy'.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'flv'.
Reading option '/dev/null' ... matched as output file.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option loglevel (set logging level) with argument debug.
Applying option y (overwrite output files) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input file rtsp://admin:admin@192.168.4.3:554/0.
Successfully parsed a group of options.
Opening an input file: rtsp://admin:admin@192.168.4.3:554/0.
[rtsp @ 0x9d9ccc0] SDP:
v=0
o=StreamingServer 3331435948 1116907222000 IN IP4 192.168.4.3
s=h264.mp4
c=IN IP4 0.0.0.0
t=0 0
a=control:*
m=video 0 RTP/AVP 96
a=control:trackID=0
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1; sprop-parameter-sets=Z0LgHtoCgPRA,aM4wpIA=
m=audio 0 RTP/AVP 0
a=control:trackID=1
a=rtpmap:0 PCMU/8000
a=ptime:20
m=application 0 RTP/AVP 107
a=control:trackID=2
a=rtpmap:107 vnd.onvif.metadata/90000
[rtsp @ 0x9d9ccc0] video codec set to: h264
[NULL @ 0x9d9f400] RTP Packetization Mode: 1
[NULL @ 0x9d9f400] Extradata set to 0x9d9f900 (size: 22)!
[rtsp @ 0x9d9ccc0] audio codec set to: pcm_mulaw
[rtsp @ 0x9d9ccc0] audio samplerate set to: 8000
[rtsp @ 0x9d9ccc0] audio channels set to: 1
[rtsp @ 0x9d9ccc0] hello state=0
[h264 @ 0x9d9f400] Current profile doesn't provide more RBSP data in PPS, skipping
Last message repeated 1 times
[rtsp @ 0x9d9ccc0] All info found
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://admin:admin@192.168.4.3:554/0':
Metadata:
title : h264.mp4
Duration: N/A, start: 0.000000, bitrate: 64 kb/s
Stream #0:0, 28, 1/90000: Video: h264 (Constrained Baseline), yuv420p, 640x480, 1/180000, 10 tbr, 90k tbn, 180k tbc
Stream #0:1, 156, 1/8000: Audio: pcm_mulaw, 8000 Hz, mono, s16, 64 kb/s
Successfully opened the file.
Parsing a group of options: output file /dev/null.
Applying option acodec (force audio codec ('copy' to copy stream)) with argument libmp3lame.
Applying option ar (set audio sampling rate (in Hz)) with argument 22050.
Applying option vcodec (force video codec ('copy' to copy stream)) with argument copy.
Applying option f (force format) with argument flv.
Successfully parsed a group of options.
Opening an output file: /dev/null.
Successfully opened the file.
detected 2 logical cores
[graph 0 input from stream 0:1 @ 0x9f15380] Setting 'time_base' to value '1/8000'
[graph 0 input from stream 0:1 @ 0x9f15380] Setting 'sample_rate' to value '8000'
[graph 0 input from stream 0:1 @ 0x9f15380] Setting 'sample_fmt' to value 's16'
[graph 0 input from stream 0:1 @ 0x9f15380] Setting 'channel_layout' to value '0x4'
[graph 0 input from stream 0:1 @ 0x9f15380] tb:1/8000 samplefmt:s16 samplerate:8000 chlayout:0x4
[audio format for output stream 0:1 @ 0x9efa7c0] Setting 'sample_fmts' to value 's32p|fltp|s16p'
[audio format for output stream 0:1 @ 0x9efa7c0] Setting 'sample_rates' to value '22050'
[audio format for output stream 0:1 @ 0x9efa7c0] Setting 'channel_layouts' to value '0x4|0x3'
[audio format for output stream 0:1 @ 0x9efa7c0] auto-inserting filter 'auto-inserted resampler 0' between the filter 'Parsed_anull_0' and the filter 'audio format for output stream 0:1'
[AVFilterGraph @ 0x9f15980] query_formats: 4 queried, 9 merged, 3 already done, 0 delayed
[auto-inserted resampler 0 @ 0x9dfada0] ch:1 chl:mono fmt:s16 r:8000Hz -> ch:1 chl:mono fmt:s16p r:22050Hz
Output #0, flv, to '/dev/null':
Metadata:
title : h264.mp4
encoder : Lavf55.12.100
Stream #0:0, 0, 1/1000: Video: h264 ([7][0][0][0] / 0x0007), yuv420p, 640x480, 1/90000, q=2-31, 1k tbn, 90k tbc
Stream #0:1, 0, 1/1000: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 22050 Hz, mono, s16p
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (pcm_mulaw -> libmp3lame)
Press [q] to stop, [?] for help
Current profile doesn't provide more RBSP data in PPS, skippingrate= 135.4kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 134.4kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 135.0kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 135.5kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 136.9kbits/s
Queue input is backward in time= 233kB time=00:00:13.69 bitrate= 139.4kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 136.3kbits/s
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 13926; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 13952; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 13979; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14005; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14031; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14057; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14083; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14109; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14135; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14161; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14188; changing to 14239. This may result in incorrect timestamps in the output file.
[flv @ 0x9de1200] Non-monotonous DTS in output stream 0:1; previous: 14239, current: 14214; changing to 14239. This may result in incorrect timestamps in the output file.
Current profile doesn't provide more RBSP data in PPS, skippingrate= 141.5kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 142.0kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 142.5kbits/s
Receiver Report delay: 469789, gettime: -1527669086, last_recep: 322446, timebase: -1534837492
Current profile doesn't provide more RBSP data in PPS, skippingrate= 141.5kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 141.7kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 141.1kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 140.6kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 140.7kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.9kbits/s
Receiver Report delay: 132993, gettime: -1516538925, last_recep: 322446, timebase: -1518568234
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.6kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.6kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.7kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.4kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 140.0kbits/s
Receiver Report delay: 897727, gettime: -1504870331, last_recep: 322446, timebase: -1518568552
[NULL @ 0x9d9f400] Current profile doesn't provide more RBSP data in PPS, skipping
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.4kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.1kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.0kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 139.0kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 138.6kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 138.5kbits/s
Current profile doesn't provide more RBSP data in PPS, skippingrate= 138.2kbits/s
EOF on sink link output stream 0:1:default.time=00:00:58.40 bitrate= 139.6kbits/s
No more output streams to write to, finishing.
[libmp3lame @ 0x9dfa580] Trying to remove 344 more samples than there are in the queue
frame= 589 fps= 11 q=-1.0 Lsize= 1003kB time=00:00:58.85 bitrate= 139.5kbits/s
video:724kB audio:231kB subtitle:0 global headers:0kB muxing overhead 4.955356%
2959 frames successfully decoded, 0 decoding errors
[AVIOContext @ 0x9e021c0] Statistics: 3 seeks, 2860 writeouts
[root@localhost ffmpeg]# -
FFMPEG - Scale video filter not providing expected results
11 November 2011, by dpassera. Apologies if this question has already been asked. I couldn't find it, but if it has, please let me know and I'll close this out.
I'm attempting a simple scale of a video whose original dimensions are 480x360 and whose target dimensions are 400x300. The video starts as an FLV and eventually needs to end up as an MPEG. I'm using the following command line to do this:
ffmpeg -i user.flv -vf "scale=400:300" user_scaled.mpg
When I play the scaled video in MPEG Streamclip, the scale is correct and the video info shows the dimensions as 400x300. However, when I play the scaled video in QuickTime, the video is scaled to 478x359. More importantly, FFMPEG itself treats the video as 478x359, so any further commands (trimming, conversion, overlaying, etc.) run against it produce a 478x359 video.
The initial workflow required an FLV to MPEG conversion, but I've tried this with several different input and output formats (FLV -> FLV, FLV -> MPEG, MPEG -> MPEG, etc.), all with the same results. As long as I can end up with an MPEG, I can deal with however many steps and conversions it takes to get the scaling working.
I'll paste the command-line output below, and a sample input video is also linked below, if you'd like it. Thank you very much for any help.
http://www.monkeydriver.com/dpassera/stack_flv.zip
Command-line output:
ffmpeg -i user.flv -vf "scale=400:300" user_scaled.mpg
ffmpeg version 0.7-rc1, Copyright (c) 2000-2011 the FFmpeg developers
built on May 21 2011 22:13:19 with gcc 4.1.2 20080704 (Red Hat 4.1.2-50)
configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64
--mandir=/usr/share/man --incdir=/usr/include --disable-avisynth
--extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC'
--enable-avfilter --enable-libdirac --enable-libgsm --enable-libmp3lame
--enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264
--enable-gpl --enable-postproc --enable-pthreads --enable-shared
--enable-swscale --enable-vdpau --enable-version3 --enable-x11grab
--disable-yasm --enable-filters --enable-filter=movie
libavutil 50. 40. 1 / 50. 40. 1
libavcodec 52.120. 0 / 52.120. 0
libavformat 52.108. 0 / 52.108. 0
libavdevice 52. 4. 0 / 52. 4. 0
libavfilter 1. 77. 0 / 1. 77. 0
libswscale 0. 13. 0 / 0. 13. 0
libpostproc 51. 2. 0 / 51. 2. 0
[flv @ 0x11dd3b30] Estimating duration from bitrate, this may be inaccurate
Input #0, flv, from 'user.flv':
Metadata:
duration : 5
videocodecid : 2
audiocodecid : 6
canSeekToEnd : true
createdby : FMS 4.0
creationdate : Mon Oct 31 11:43:44 2011
Duration: 00:00:04.62, start: 0.000000, bitrate: N/A
Stream #0.0: Video: flv, yuv420p, 640x480, 1k tbr, 1k tbn, 1k tbc
Stream #0.1: Audio: nellymoser, 44100 Hz, mono, s16
[buffer @ 0x11ddc950] w:640 h:480 pixfmt:yuv420p
[scale @ 0x11dda610] w:640 h:480 fmt:yuv420p -> w:400 h:300 fmt:yuv420p flags:0xa0000004
[mpeg @ 0x11dd6bd0] VBV buffer size not set, muxing may fail
Output #0, mpeg, to 'user_scaled.mpg':
Metadata:
duration : 5
videocodecid : 2
audiocodecid : 6
canSeekToEnd : true
createdby : FMS 4.0
creationdate : Mon Oct 31 11:43:44 2011
encoder : Lavf52.108.0
Stream #0.0: Video: mpeg1video, yuv420p, 400x300, q=2-31, 200 kb/s, 90k tbn, 60 tbc
Stream #0.1: Audio: mp2, 44100 Hz, mono, s16, 64 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Press [q] to stop encoding
frame= 230 fps= 0 q=10.2 size= 366kB time=3.82 bitrate= 785.6kbits/s dup=175 drop=0
frame= 267 fps= 0 q=10.7 Lsize= 412kB time=4.43 bitrate= 761.3kbits/s dup=203 drop=0
video:370kB audio:36kB global headers:0kB muxing overhead 1.568959% -
Parsing the STDERR output of node.js child_process line by line
3 January 2012, by primer. I'm writing a simple online conversion tool using FFMPEG and Node.js. I'm trying to figure out how to parse each line of the conversion output received from FFMPEG and only display pertinent results client-side in the browser. In my case I want the encoding time counter that FFMPEG prints on the command line.
My function thus far is:
function metric(ffmpeg, res) {
    var temp = '';
    ffmpeg.stdout.on('data', function(data) {
        res.writeHead(200, {'content-type': 'text/html'});
        res.write('received upload:\n\n');
        console.log(data);
    });
    ffmpeg.stderr.on('data', function (data) {
        temp += data.toString();
        var lines = temp.split('\n');
        // for debugging purposes
        for (var i = 0; i < lines.length; i++) {
            console.log(lines[i]);
        }
    });
}
What this ends up returning is multiple arrays, each of which includes the data from the previous array as well as the next data chunk. For example, the function returns array 1: 0=>A, 1=>B; array 2: 0=>A, 1=>B, 2=>C; array 3: 0=>A, 1=>B, 2=>C, 3=>D; and so on.
I'm quite new to Node so I'm probably missing something simple. Any guidance would be much appreciated!
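For context, one common approach is to keep a single buffer that persists across 'data' events, split it on both newlines and carriage returns (ffmpeg rewrites its progress line with \r), and carry the trailing partial line over to the next chunk. The sketch below is only one possible way to do this, not the poster's eventual solution; the 'time=HH:MM:SS.xx' regular expression is an assumption about ffmpeg's usual progress format (older builds, like the one in the previous log, print time= as plain seconds instead).

// Sketch: spawn ffmpeg and report the encoding time counter once per complete stderr line.
var spawn = require('child_process').spawn;

function convert(input, output, onProgress) {
    var ffmpeg = spawn('ffmpeg', ['-i', input, '-y', output]);
    var buffer = '';

    ffmpeg.stderr.on('data', function (chunk) {
        buffer += chunk.toString();
        // ffmpeg separates progress updates with \r and normal lines with \n
        var lines = buffer.split(/\r\n|\r|\n/);
        buffer = lines.pop();            // keep the trailing partial line for the next chunk
        lines.forEach(function (line) {
            var match = line.match(/time=(\d{2}:\d{2}:\d{2}\.\d+)/);
            if (match) {
                onProgress(match[1]);    // e.g. "00:00:13.69"
            }
        });
    });

    ffmpeg.on('close', function (code) {
        console.log('ffmpeg exited with code ' + code);
    });
}

// Usage: convert('user.flv', 'out.mpg', function (t) { console.log('encoded up to', t); });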