
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (38)
-
Submit enhancements and plugins
13 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the core MediaSPIP functionality will be considered.
You can use the development discussion list to ask for help with creating a plugin. Since MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone. -
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
Automatic MediaSPIP installation script
25 April 2011, by
To work around installation difficulties, mainly due to server-side software dependencies, an all-in-one bash installation script was created to simplify this step on a server running a compatible Linux distribution.
To use it, you need SSH access to your server and a "root" account, which allows the dependencies to be installed. Contact your hosting provider if you do not have these.
The documentation on using the installation script (...)
On other sites (7071)
-
How can I make a GStreamer pipeline to read individual frames and publish a stream?
30 January 2024, by Alvan Rahimli
I have an external system that sends individual H264-encoded frames one by one via a socket. What I'm trying to do is take these frames and publish an RTSP stream to the RTSP server that I have.


After getting the frames (which just means reading the TCP socket in chunks), my current approach is as follows:


I read the frames, start a process with the following command, and then write every frame to the process's STDIN.


gst-launch-1.0 -e fdsrc fd=0 ! 
 h264parse ! 
 avdec_h264 ! 
 videoconvert ! 
 videorate ! 
 video/x-raw,framerate=25/1 ! 
 avimux ! 
 filesink location=gsvideo3.avi



I know this writes the stream to an AVI file, but it is the closest I have come to getting a normal video. It is also probably very inefficient and full of redundant pipeline steps.


I am also open to FFmpeg commands, but GStreamer is preferred since I will be able to embed it in my C# project via bindings and keep everything in-process.


Any help is appreciated, thanks in advance!
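One leaner sketch, under stated assumptions: since the incoming frames are already H264, the decode/re-encode stages (avdec_h264, videoconvert, the caps filter) can be dropped entirely and the parsed stream handed straight to an RTSP server. The server URL below is a placeholder, and the sketch assumes the rtspclientsink element (shipped with gst-rtsp-server) is installed and that the server accepts RECORD publishing.

```shell
# Read H264 frames from STDIN, parse them, and publish to an RTSP server
# without decoding or re-encoding. rtsp://localhost:8554/mystream is an
# assumed placeholder URL; config-interval=-1 re-sends SPS/PPS on every
# IDR frame so late-joining viewers can start decoding.
gst-launch-1.0 -e fdsrc fd=0 ! \
 h264parse config-interval=-1 ! \
 rtspclientsink location=rtsp://localhost:8554/mystream
```

Whether this works depends on the RTSP server supporting RECORD (publish) mode, which is an assumption to verify against your server's documentation.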


-
Crop individual frames of a video and then concat for output
27 June 2024, by Ashish Padave
I want to crop individual frames of a video and then concatenate them for the output. This works with two ffmpeg commands: the first extracts each frame and the second concatenates them.
I want to get it done without the intermediate frames.


I tried the following:


ffmpeg -y -i input.mp4 -filter_complex "[0:v]split=890[v0][v1][v2][v3][v4][v5];[v0]select='eq(n\,0)',setpts=PTS-STARTPTS,crop=404:720:225:0[v0]; [v1]select='eq(n\,1)',setpts=PTS-STARTPTS,crop=404:720:225:0[v1]; [v2]select='eq(n\,2)',setpts=PTS-STARTPTS,crop=404:720:225:0[v2]; [v3]select='eq(n\,3)',setpts=PTS-STARTPTS,crop=404:720:225:0[v3]; [v4]select='eq(n\,4)',setpts=PTS-STARTPTS,crop=404:720:225:0[v4]; [v5]select='eq(n\,5)',setpts=PTS-STARTPTS,crop=404:720:225:0[v5];[v0][v1][v2][v3][v4][v5]concat=n=6:v=1:a=0[outv]" -map "[outv]" -map 0:a? -c:a copy -vsync 2 output.mp4



The above is an abridged version of the command. The video I am working with has 890 frames and a frame rate of 25 fps.


The output log with 890 frames is:


Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf59.5.100
 Duration: 00:00:35.61, start: 0.000000, bitrate: 3380 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720, 3246 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
 Metadata:
 handler_name : VideoHandler
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
 Metadata:
 handler_name : SoundHandler
Stream mapping:
 Stream #0:0 (h264) -> split
 concat -> Stream #0:0 (libx264)
 Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[libx264 @ 0x55963ad4b700] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2 AVX512
[libx264 @ 0x55963ad4b700] profile High, level 3.0
[libx264 @ 0x55963ad4b700] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 404x720, q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
 Metadata:
 encoder : Lavc58.54.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
 Metadata:
 handler_name : SoundHandler
frame= 3 fps=0.6 q=-1.0 Lsize= 586kB time=00:00:35.59 bitrate= 134.9kbits/s dup=0 drop=887 speed=6.73x
video:21kB audio:558kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.278277%
[libx264 @ 0x55963ad4b700] frame I:1 Avg QP:23.09 size: 17546
[libx264 @ 0x55963ad4b700] frame P:1 Avg QP:25.73 size: 2069
[libx264 @ 0x55963ad4b700] frame B:1 Avg QP:26.34 size: 845
[libx264 @ 0x55963ad4b700] consecutive B-frames: 33.3% 66.7% 0.0% 0.0%
[libx264 @ 0x55963ad4b700] mb I I16..4: 8.4% 68.9% 22.7%
[libx264 @ 0x55963ad4b700] mb P I16..4: 0.1% 0.6% 0.3% P16..4: 28.5% 5.0% 3.9% 0.0% 0.0% skip:61.6%
[libx264 @ 0x55963ad4b700] mb B I16..4: 0.0% 0.1% 0.0% B16..8: 29.6% 3.4% 0.3% direct: 0.2% skip:66.4% L0:59.3% L1:35.6% BI: 5.1%
[libx264 @ 0x55963ad4b700] 8x8 transform intra:68.8% inter:73.3%
[libx264 @ 0x55963ad4b700] coded y,uvDC,uvAC intra: 73.0% 80.1% 36.4% inter: 3.9% 7.0% 0.7%
[libx264 @ 0x55963ad4b700] i16 v,h,dc,p: 2% 74% 6% 18%
[libx264 @ 0x55963ad4b700] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 13% 36% 20% 8% 4% 2% 6% 4% 8%
[libx264 @ 0x55963ad4b700] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 41% 10% 4% 7% 4% 5% 3% 5%
[libx264 @ 0x55963ad4b700] i8c dc,h,v,p: 44% 37% 14% 6%
[libx264 @ 0x55963ad4b700] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x55963ad4b700] kb/s:1364.00



So it is basically dropping 887 frames. The output file has the full audio but no video.
Is this even possible?
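If the crop rectangle really does vary per frame in the full command (in the abridged version every chain uses the same 404:720:225:0 rectangle), one alternative sketch avoids the 890-way split entirely by driving a single crop filter with sendcmd. The crop values and the crop.cmd filename below are illustrative assumptions, not taken from the question.

```shell
# crop.cmd schedules per-frame crop offsets by timestamp; at 25 fps each
# frame lands 0.04 s apart. The x/y values here are placeholders.
cat > crop.cmd <<'EOF'
0.00 crop x 225, crop y 0;
0.04 crop x 230, crop y 0;
0.08 crop x 235, crop y 0;
EOF

# One filter chain replaces the split/select/concat construction; the
# crop filter accepts runtime x/y commands, and audio is copied through.
ffmpeg -y -i input.mp4 \
 -filter_complex "[0:v]sendcmd=f=crop.cmd,crop=404:720:225:0[outv]" \
 -map "[outv]" -map 0:a? -c:a copy output.mp4
```

If the rectangle is actually constant for the whole clip, even the sendcmd step is unnecessary and a plain `-vf crop=404:720:225:0` would do.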


-
FFmpeg individual image zoom-in transition
19 July 2024, by The Somebody
I'm stuck with an ffmpeg command that needs to generate a video. Originally, I had an ffmpeg command that would loop through images and change them after a certain amount of time. Now, to improve the video a little, I want to add a zoom-in effect to each image.


I am facing an issue where, when I generate the video, the images no longer change; it is constantly the first image. I can see the zoom effect restart as it should (once per image change), but the image itself never actually changes (it is always the first picture). Any advice or suggestion would be appreciated. I am sure it is something wrong with the syntax, but for now I am at a loss.


Original code that works fine (without the zoom-in):


ffmpeg -n -loop 1 -t 26.01 -i "\image_0.png" -loop 1 -t 26.01 -i "\image_1.png"
 -loop 1 -t 26.01 -i "\image_2.png" -loop 1 -t 26.01 -i "\image_3.png" 
 -i "/speech.mp3" 
 -filter_complex "[0:v]scale=1080x1920,setpts=PTS-STARTPTS[v0]; 
 [1:v]scale=1080x1920,setpts=PTS-STARTPTS[v1]; 
 [2:v]scale=1080x1920,setpts=PTS-STARTPTS[v2]; 
 [3:v]scale=1080x1920,setpts=PTS-STARTPTS[v3]; 
 [v0][v1][v2][v3]concat=n=4:v=1:a=0,subtitles='"C\:/transcription.ass"'[v]" -map "[v]" -map 4:a -c:v libx264 -c:a aac -b:a 192k
 -shortest "C:\Videos/Video.mp4" -loglevel verbose



Code that does not work as intended:


ffmpeg -n -loop 1 -t 26.01 -i "\image_0.png" -loop 1 -t 26.01 -i "\image_1.png" -loop 1 -t 26.01 -i "\image_2.png" -loop 1 -t 26.01 -i "\image_3.png" -i "/speech.mp3" -filter_complex "[0:v]scale=1080x1920,zoompan=z='zoom+0.001':d=650:s=1080x1920:x=iw/2-(iw/zoom/2):y=ih/2-(ih/zoom/2),setpts=PTS-STARTPTS[v0]; [1:v]scale=1080x1920,zoompan=z='zoom+0.001':d=650:s=1080x1920:x=iw/2-(iw/zoom/2):y=ih/2-(ih/zoom/2),setpts=PTS-STARTPTS[v1]; [2:v]scale=1080x1920,zoompan=z='zoom+0.001':d=650:s=1080x1920:x=iw/2-(iw/zoom/2):y=ih/2-(ih/zoom/2),setpts=PTS-STARTPTS[v2]; [1:v]scale=1080x1920,zoompan='z=zoom+0.001':d=650:s=1080x1920:x=iw/2-(iw/zoom/2):y=ih/2-(ih/zoom/2),setpts=PTS-STARTPTS[v3]; [v0][v1][v2][v3]concat=n=4:v=1:a=0,subtitles='C\:/transcription.ass'[v]" -map "[v]" -map 4:a -c:v libx264 -c:a aac -b:a 192k -shortest "C:\Videos/Video.mp4" -loglevel verbose



I changed the images to check whether it really is just the first image being used throughout the video, and confirmed it. I also tried playing around with the quoting characters ('"', ' ' ', etc.).
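One issue visible in the non-working command, and worth ruling out first, is that the fourth filter chain reads [1:v] instead of [3:v], so image_3.png is never consumed. A hedged sketch of that one corrected chain, keeping the same 1080x1920 output as the original command:

```shell
# Fourth chain only: the input label must be [3:v] so the fourth image is
# actually used. Everything else matches the chains for [v0]-[v2].
[3:v]scale=1080x1920,zoompan=z='zoom+0.001':d=650:s=1080x1920:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)',setpts=PTS-STARTPTS[v3]
```

A second thing to check, per the zoompan documentation: `d` is the number of output frames generated per *input* frame. Combining `-loop 1 -t 26.01` (which already produces roughly 650 input frames per image) with `d=650` multiplies the frame counts; the usual pattern is to feed zoompan a single frame per image (drop `-loop 1 -t 26.01`) and let `d=650` generate the 26 seconds at 25 fps. Whether that is the cause of the stuck-image symptom here is an assumption to verify.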