
Media (3)
-
Example of action buttons for a collaborative collection
27 February 2013, by
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013, by
Updated: February 2013
Language: English
Type: Image
-
Collections - quick creation form
19 February 2013, by
Updated: February 2013
Language: French
Type: Image
Other articles (26)
-
Accepted formats
28 January 2010, by
The following commands give information about the formats and codecs supported by the local installation of ffmpeg:
ffmpeg -codecs
ffmpeg -formats
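For example, to check whether a given codec or container is available in the local build, the output of these commands can be filtered (an illustrative usage, assuming a Unix-like shell with grep):
ffmpeg -codecs | grep h264
ffmpeg -formats | grep matroska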
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
To begin with, we (...)
-
Adding notes and captions to images
7 February 2011, by
To add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
-
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page.
On other sites (8302)
-
FFMPEG xstack not recognizing inputs
12 August 2020, by Josh
I'm trying to arrange three input videos into a single output video using ffmpeg's xstack. I currently have the operation working with a vstack followed by an hstack, but I would like to combine them into a single xstack for performance.


I've tried copying the syntax from multiple locations, such as:

Vertically or horizontally stack (mosaic) several videos using ffmpeg?


My command is as follows:

C:\ffmpeg\bin\ffmpeg.exe -i states_full.mp4 -i title.mp4 -i graphs.mp4" -filter_complex "[0:v] setpts=PTS-STARTPTS, scale=qvga [a0]; [1:v] setpts=PTS-STARTPTS, scale=qvga [a1]; [2:v] setpts=PTS-STARTPTS, scale=qvga [a2]; [a0][a1][a2]xstack=inputs=3:layout=0_0|w0_0|w0_h0[out]" -map "[out]" -c:v libx264 -t '30' -f matroska output.mp4




The command always errors out at the same spot, with the same error message:

'w0_0' is not recognized as an internal or external command,
operable program or batch file.




Some odd behavior is that even when I change the layout section to:

layout=w0_0|0_0|w0_h0

the error message still points at the middle '0_0', which suggests the problem lies in the formatting.


This issue is very strange, as the vstack and hstack versions still work; only the xstack fails.
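A plausible lead, offered as an editor's note rather than part of the original post: the stray double quote after graphs.mp4 unbalances the quoting, so cmd.exe sees the | characters of the layout outside any quoted string and treats them as pipe operators, which is exactly what produces the "'w0_0' is not recognized as an internal or external command" error. A corrected sketch of the same command, with the quotes balanced:

C:\ffmpeg\bin\ffmpeg.exe -i states_full.mp4 -i title.mp4 -i graphs.mp4 -filter_complex "[0:v] setpts=PTS-STARTPTS, scale=qvga [a0]; [1:v] setpts=PTS-STARTPTS, scale=qvga [a1]; [2:v] setpts=PTS-STARTPTS, scale=qvga [a2]; [a0][a1][a2]xstack=inputs=3:layout=0_0|w0_0|w0_h0[out]" -map "[out]" -c:v libx264 -t 30 -f matroska output.mp4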


-
ffmpeg concat .dv without errors or loss of audio sync
29 March 2022, by Dave Lang
I'm ripping video from a bunch of ancient MiniDV tapes using, after much trial and error, some almost-as-ancient Mac hardware and iMovie HD 6.0.5. This works well, except that it will only create a contiguous video clip of about 12.6 GB. If the total video is larger than that, it creates a second clip, usually about 500 MB.


I want to join these two clips in the "best" way possible, meaning with ffmpeg throwing as few errors as possible and the audio/video staying in sync.


I'm currently using the following command line in a bash shell:

for f in *.dv ; do echo "file '$f'" >> list.txt ; done && ffmpeg -f concat -safe 0 -i list.txt -c copy stitched-video.dv && rm list.txt


This seems to be working well, and using the 'eyeball' check, sync seems to be preserved.


However, I do get the following error message when ffmpeg starts in on the second file:

Non-monotonous DTS in output stream 0:1; previous: 107844491, current: 107843736; changing to 107844492. This may result in incorrect timestamps in the output file.


Since I know just enough about ffmpeg to be dangerous, I don't understand the significance of this message.


Can anyone suggest changes to my ffmpeg command that will fix whatever ffmpeg is telling me is going wrong?


I'm going to be working on HD MiniDV tapes next, and, because they suffer from numerous dropouts, my task is going to become more complex, so I'd like to nail this one.


Thanks!


As suggested below, here is the ffprobe output for the two files:


Input #0, dv, from 'file1.dv':
  Metadata:
    timecode: 00:00:00;22
  Duration: 00:59:54.79, start: 0.000000, bitrate: 28771 kb/s
  Stream #0:0: Video: dvvideo, yuv411p, 720x480 [SAR 8:9 DAR 4:3], 25000 kb/s, 29.97 fps, 29.97 tbr, 29.97 tbn
  Stream #0:1: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s


Input #0, dv, from 'file2.dv':
  Metadata:
    timecode: 00:15:06;19
  Duration: 00:02:04.09, start: 0.000000, bitrate: 28771 kb/s
  Stream #0:0: Video: dvvideo, yuv411p, 720x480 [SAR 8:9 DAR 4:3], 25000 kb/s, 29.97 fps, 29.97 tbr, 29.97 tbn
  Stream #0:1: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
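One way to see exactly what the warning refers to (an illustrative check, not from the original post) is to dump the audio packet DTS values around the join and compare the last timestamps of the first file with the first timestamps of the second:

ffprobe -v error -select_streams a:0 -show_entries packet=dts,dts_time -of csv file1.dv | tail -5
ffprobe -v error -select_streams a:0 -show_entries packet=dts,dts_time -of csv file2.dv | head -5

The message itself says that one audio packet arrived with a DTS slightly behind its predecessor and that ffmpeg nudged it forward past the previous value, so for a single join point in PCM audio the practical impact is usually negligible.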


-
Android Encode h264 using libavcodec for ARGB
12 December 2013, by nmxprime
I have a stream of buffer content which actually contains a 480x800 ARGB image [a byte array of size 480*800*4]. I want to encode around 10,000 such images into an H.264 stream at a specified fps (12). This shows how to encode images into video, but it requires the input to be YUV420.
Now I have ARGB images and I want to encode them with CODEC_ID_H264.
"How to convert RGB from YUV420p for ffmpeg encoder?" shows how to do it for RGB24, but how do I do it for RGB32, meaning ARGB image data? How do I use libavcodec for this?
EDIT: I found "How to convert RGB from YUV420p for ffmpeg encoder?"
But I don't understand. From the first link, I learned that the AVFrame struct contains data[0], data[1], data[2], which are filled with the Y, U and V values.
In the second link, they showed how to use sws_scale to convert RGB24 to YUV420, as follows:
SwsContext *ctx = sws_getContext(imgWidth, imgHeight,
                                 AV_PIX_FMT_RGB24, imgWidth, imgHeight,
                                 AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
uint8_t *inData[1] = { rgb24Data };   // RGB24 has one plane
int inLinesize[1] = { 3 * imgWidth }; // RGB stride
sws_scale(ctx, inData, inLinesize, 0, imgHeight,
          dst_picture.data, dst_picture.linesize);

Here I assume that rgb24Data is the buffer containing the RGB24 image bytes.
So how do I use this for ARGB, which is 32-bit? Do I need to manually strip off the alpha channel, or is there another workaround?
Thank you
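For what it's worth, a minimal sketch of the same call adapted to a packed 32-bit input (an editor's illustration, not from the original post; the pixel-format constant depends on the actual byte order of the buffer, so swap AV_PIX_FMT_RGBA for AV_PIX_FMT_BGRA or AV_PIX_FMT_ARGB if the colours come out wrong). swscale simply ignores the alpha channel when converting to YUV420P, so there is no need to strip it manually:

#include <libswscale/swscale.h>

/* Hypothetical names: argbData is the 480*800*4 input buffer,
   frame is an allocated AVFrame with format AV_PIX_FMT_YUV420P. */
struct SwsContext *ctx = sws_getContext(imgWidth, imgHeight,
                                        AV_PIX_FMT_RGBA, /* packed 32-bit input, assumed byte order */
                                        imgWidth, imgHeight,
                                        AV_PIX_FMT_YUV420P,
                                        SWS_BILINEAR, NULL, NULL, NULL);
const uint8_t *inData[1] = { argbData }; /* still a single packed plane */
int inLinesize[1] = { 4 * imgWidth };    /* 4 bytes per pixel instead of 3 */
sws_scale(ctx, inData, inLinesize, 0, imgHeight,
          frame->data, frame->linesize);
sws_freeContext(ctx);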