
Other articles (44)
-
Media quality after processing
21 June 2013
Properly configuring the software that processes the media matters for striking a balance between the parties involved (the host's bandwidth, media quality for the editor and the visitor, accessibility for the visitor). How should you set the quality of your media?
The higher the media quality, the more bandwidth is used, and a visitor on a slow connection will have to wait longer. Conversely, the lower the quality, the more degraded the media becomes, even (...)
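As a rough illustration of that trade-off (not part of the original article, and assuming an x264-based processing chain; the file names are made up), the CRF setting in ffmpeg is one common way to trade bandwidth against visual quality: a lower CRF gives better quality and larger files, a higher CRF gives smaller but more degraded files.
ffmpeg -i source.mp4 -c:v libx264 -crf 20 -preset medium -c:a aac high_quality.mp4
ffmpeg -i source.mp4 -c:v libx264 -crf 30 -preset medium -c:a aac low_bandwidth.mp4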
-
No talk of markets, cloud, etc.
10 April 2011
The vocabulary used on this site tries to avoid any reference to the fashions that flourish so freely on Web 2.0 and in the companies that live off it.
You are therefore invited to refrain from using terms such as "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing of creative work on the Internet and lets authors keep as much autonomy as possible.
No "Gold or Premium contract" is therefore planned, no (...)
-
Adding user-specific information and other author-related behaviour changes
12 April 2011
The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (see its documentation for more information).
It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.
On other sites (6300)
-
How to insert infinite audio into a live stream using ffmpeg?
20 July 2017, by Xiang Wang
I use the following command to insert infinite audio into an RTSP stream, but -stream_loop -1 didn't work. How do I fix it?
ffmpeg -i rtsp://admin:12345@192.168.31.104/h264/ch1/main/av_stream -stream_loop -1 -i blank.aac -vcodec libx264 -acodec aac -f flv rtmp://localhost/live/test
This is the full console when I executed the following command :
ffmpeg -i "rtsp://admin:admin@192.168.31.140:554/cam/realmonitor?channel=1&subtype=0" -f lavfi -i anullsrc -vcodec libx264 -acodec aac -f flv rtmp://live.lamp.mowainfo.com:11935/live/test
mowa@OpenWRT-Dev:~$ ffmpeg -i "rtsp://admin:admin@192.168.31.140:554/cam/realmonitor?channel=1&subtype=0" -f lavfi -i anullsrc -vcodec libx264 -acodec aac -f flv rtmp://live.lamp.mowainfo.com:11935/live/test
ffmpeg version N-86764-ga824685 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.4) 20160609
configuration: --enable-nonfree --enable-gpl --enable-libx264 --enable-zlib
libavutil 55. 67.100 / 55. 67.100
libavcodec 57.100.104 / 57.100.104
libavformat 57. 75.100 / 57. 75.100
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 95.100 / 6. 95.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://admin:admin@192.168.31.140:554/cam/realmonitor?channel=1&subtype=0':
Metadata:
title : RTSP Session/2.0
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1280x720, 25.08 tbr, 90k tbn, 180k tbc
Stream #0:1: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s
Input #1, lavfi, from 'anullsrc':
Duration: N/A, start: 0.000000, bitrate: 705 kb/s
Stream #1:0: Audio: pcm_u8, 44100 Hz, stereo, u8, 705 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Stream #1:0 -> #0:1 (pcm_u8 (native) -> aac (native))
Press [q] to stop, [?] for help
[rtsp @ 0x23d0520] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[libx264 @ 0x244d200] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
[libx264 @ 0x244d200] profile High, level 3.1
[libx264 @ 0x244d200] 264 - core 148 r2643 5c65704 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, flv, to 'rtmp://live.lamp.mowainfo.com:11935/live/test':
Metadata:
title : RTSP Session/2.0
encoder : Lavf57.75.100
Stream #0:0: Video: h264 (libx264) ([7][0][0][0] / 0x0007), yuv420p, 1280x720, q=-1--1, 25.08 fps, 1k tbn, 25.08 tbc
Metadata:
encoder : Lavc57.100.104 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 44100 Hz, stereo, fltp, 128 kb/s
Metadata:
encoder : Lavc57.100.104 aac
Past duration 0.642738 too large 13286kB time=00:03:50.22 bitrate= 472.8kbits/s dup=0 drop=4 speed=0.985x
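One possible workaround for the looping question above, offered only as an untested sketch: instead of -stream_loop (which must precede the -i it applies to, as it already does in the original command), the amovie lavfi source can loop a file indefinitely with loop=0, and the silent anullsrc source shown in the console also runs forever. Keeping the original addresses and file names, something along these lines might behave better:
ffmpeg -i rtsp://admin:12345@192.168.31.104/h264/ch1/main/av_stream -f lavfi -i "amovie=blank.aac:loop=0" -map 0:v -map 1:a -vcodec libx264 -acodec aac -f flv rtmp://localhost/live/test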
-
ffmpeg concat error - unusual video
19 July 2017, by Samhita Vempatti
I have been trying to concatenate two 48-second video clips into one using the following command:
ffmpeg -f concat -safe 0 -i C:\moviepy-master\concat.txt -c copy output.mp4
When I play the result in Windows Media Player, the first 48 seconds play fine, but the player closes before the second clip starts.
When I play it in VLC media player instead, the audio and video of the second clip are not in sync.
I also tried giving the inputs separately, which produces the following error:
C:\Users\SAMHITA VVNK>ffmpeg -f concat -safe 0 -i C:\moviepy-master\extract1.mp4 -i C:\moviepy-master\extract2.mp4 -c copy -flags +global_header output.mp4
ffmpeg version N-86723-g3b3501f Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 7.1.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
libavutil 55. 67.100 / 55. 67.100
libavcodec 57.100.103 / 57.100.103
libavformat 57. 75.100 / 57. 75.100
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 94.100 / 6. 94.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
C:\moviepy-master\extract1.mp4: Invalid data found when processing input
The input files' specifications are as follows:
C:\Users\SAMHITA VVNK>ffmpeg -i C:\moviepy-master\extract.mp4 -filter_complex "[0:v]setpts=0.5*PTS[v];[0:a]atempo=2.0" -map "[a]" extract1fast.mp4
C:\Users\SAMHITA VVNK>ffmpeg -i C:\moviepy-master\extract2.mp4 -i C:\moviepy-master\extract1.mp4
ffmpeg version N-86723-g3b3501f Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 7.1.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
libavutil 55. 67.100 / 55. 67.100
libavcodec 57.100.103 / 57.100.103
libavformat 57. 75.100 / 57. 75.100
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 94.100 / 6. 94.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\moviepy-master\extract2.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.56.101
Duration: 00:00:49.00, start: 0.012993, bitrate: 785 kb/s
Stream #0:0(eng): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(tv, smpte170m/smpte170m/bt709), 640x360, 668 kb/s, 29.97 fps, 29.97 tbr, 11988 tbn, 23976 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 137 kb/s (default)
Metadata:
handler_name : SoundHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\moviepy-master\extract1.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf57.56.101
Duration: 00:00:49.00, start: 0.012993, bitrate: 526 kb/s
Stream #1:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 640x360 [SAR 1:1 DAR 16:9], 462 kb/s, 24.86 fps, 24.86 tbr, 19888 tbn, 49.72 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #1:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 95 kb/s (default)
Metadata:
handler_name : SoundHandler
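Concatenating with -c copy generally only behaves when both clips share the same codec parameters, frame rate and channel layout, which is not the case here (Constrained Baseline 29.97 fps mono vs. High 24.86 fps stereo). A possible re-encoding approach, shown only as a sketch (the output name, stream order and codec choices are assumptions), is the concat filter, with the mono track converted to stereo so the audio streams match:
ffmpeg -i C:\moviepy-master\extract1.mp4 -i C:\moviepy-master\extract2.mp4 -filter_complex "[1:a]aformat=channel_layouts=stereo[a1];[0:v][0:a][1:v][a1]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" -c:v libx264 -c:a aac joined.mp4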
-
Overlay image on a moving object in a video (Augmented Reality / OpenCV)
30 July 2017, by Karandeep Atwal
I am using FFmpeg to overlay an image/emoji on a video with this command:
"-i "+inputfilePath+" -filter_complex "+"[0][1]overlay=enable='between(t,"+startTime+","+endTime+")'[v1]"+" -map [v0] -map 0:a "+OutputfilePath;
But the command above only overlays the image on the video, and it stays still.
Instagram and Snapchat have a pin feature; I want exactly the same, e.g. a blur on moving faces, or as in the videos below. Is that possible with FFmpeg?
I think someone with OpenCV or Augmented Reality knowledge can help with this. It is quite similar to AR, since we need to move/zoom the emoji exactly where we want on the video or live camera feed.
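FFmpeg has no built-in object tracking, but the overlay filter's x and y can be expressions of time t, so an emoji can at least follow a scripted path; true tracking would mean computing coordinates per frame with OpenCV and feeding them back in. A minimal sketch of the expression approach (the file names and the motion path are made up for illustration):
ffmpeg -i input.mp4 -i emoji.png -filter_complex "[0][1]overlay=x='100+40*t':y='200+10*sin(2*PI*t)':enable='between(t,2,8)'[v]" -map "[v]" -map 0:a -c:a copy output.mp4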