
Other articles (64)
-
Using and configuring the script
19 January 2011
Information specific to the Debian distribution
If you use this distribution, you will need to enable the "debian-multimedia" repositories as explained here:
Since version 0.3.1 of the script, the repository can be enabled automatically after answering a prompt.
Getting the script
The installation script can be obtained in two different ways.
Via svn, using this command to fetch the up-to-date source code:
svn co (...)
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
-
Enabling and disabling features (plugins)
18 February 2011
To manage adding and removing extra features (or plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to enable plugins from the MediaSPIP configuration area.
To access it, go to the configuration area and then to the "Gestion des plugins" (plugin management) page.
By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated to work perfectly with each (...)
On other sites (6248)
-
ffmpeg - audio sync issue
27 August 2020, by Tuxa
First, I decode a video into single frames:


ffmpeg -vsync 0 -i video.mp4 -y frames/%d.png -f mkvtimestamp_v2 pts.txt


Then, those frames are processed and overwritten using another application and an mp4 video is created again. Note that the original video is reused to copy the audio stream.


ffmpeg -vsync 0 -i frames/%d.png -vsync 0 -i video.mp4 -map 0 -map 1:a output.mp4


At the end, mp4fpsmod is used to transfer the pts so the output has exactly the same frame timestamps:


mp4fpsmod output.mp4 -t pts.txt -o output_fixed.mp4


This works fine for most videos.
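For context, a minimal Node.js/TypeScript sketch of the three steps above chained together, assuming ffmpeg and mp4fpsmod are on the PATH; the arguments simply mirror the commands shown in this question:


import { execFileSync } from "child_process";

// Sketch of the pipeline described above (not the real tool's code).
function rebuildVideo(input: string): void {
  // 1. Decode to PNG frames and dump the original timestamps
  execFileSync("ffmpeg", [
    "-vsync", "0", "-i", input, "-y", "frames/%d.png",
    "-f", "mkvtimestamp_v2", "pts.txt",
  ]);

  // 2. ...the external application rewrites the PNG frames here...

  // 3. Re-encode the frames and copy the audio from the original file
  execFileSync("ffmpeg", [
    "-vsync", "0", "-i", "frames/%d.png",
    "-vsync", "0", "-i", input,
    "-map", "0", "-map", "1:a", "output.mp4",
  ]);

  // 4. Transfer the original timestamps onto the new video track
  execFileSync("mp4fpsmod", ["output.mp4", "-t", "pts.txt", "-o", "output_fixed.mp4"]);
}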


However, for some videos that are fine on their own, audio and video end up slightly out of sync after processing. This seems to be related to these warnings emitted while decoding the video:




[mkvtimestamp_v2 @ 0x55f1000e7ec0] Non-monotonous DTS in output stream 1:0; previous: 4400, current: 4400; changing to 4401. This may result in incorrect timestamps in the output file.






[image2 @ 0x55f1000ee040] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 132 >= 132




Why does this happen? Can this be fixed somehow?


-
ffmpeg creates screenshots full of artefacts and completely illegible
17 September 2020, by Hekimen
I am using ffmpeg to create 15 screenshots from a video with a simple command:


ffmpeg -ss 10:00 -y -i 'video.mp4' -f mjpeg -vframes 1 -an 'image.jpg'



The command is executed 15 times in a row with a different -ss time each run. But sometimes, completely at random, the images (all 15 of them, even though each one is created by its own process) are generated full of artefacts and are completely illegible.
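For reference, a minimal Node.js/TypeScript sketch of that loop, assuming ffmpeg is on the PATH; the seek offsets and file names here are illustrative, not the real application's values:


import { execFile } from "child_process";
import { promisify } from "util";

const run = promisify(execFile);

// Hypothetical driver for the 15 screenshots described above.
async function makeScreenshots(video: string): Promise<void> {
  for (let i = 1; i <= 15; i++) {
    const seek = String(i * 60); // seconds; assumption: one screenshot per minute
    await run("ffmpeg", [
      "-ss", seek, "-y", "-i", video,
      "-f", "mjpeg", "-vframes", "1", "-an", `image${i}.jpg`,
    ]);
  }
}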



I believe it is not a problem with the videos, as they are almost always different (codecs, bitrate, length, quality, resolution, etc.). I tried searching for a similar problem, but the only matching case I found was images created from a real-time stream over UDP (a problem with the UDP transfer protocol), which is not my case, since ffmpeg and the videos are installed and stored on the same HDD (CentOS). The output of the ffmpeg command also shows no errors. I am also unable to reproduce this behaviour: when I run the screenshot process again, all screenshots are created properly. My only suspicion is server load; the screenshots are created on an encoding server with an average CPU load around 50%.

Is it possible that when the CPU is fully loaded, ffmpeg can create broken images?


-
What is the fastest way to load a local image using javascript and/or nodejs, and the fastest way to getImageData?
4 October 2020, by Tom Lecoz
I'm working on an online video-editing tool for a large audience.
Users can create "scenes" with multiple images, videos, text and sound, add a transition between 2 scenes, add special effects, etc.


When users are happy with what they made, they can download the result as an mp4 file at the desired resolution and framerate, say full-HD 60 fps for example (it can be bigger).


I'm using nodejs & ffmpeg to build the mp4 from an HtmlCanvasElement.
Because it's impossible to seek perfectly frame-by-frame with an HtmlVideoElement, I start by converting the videos of each "scene" into a sequence of png files using ffmpeg.
Then I read my scene frame by frame and, if there are videos, I replace the videoElements with an image containing the right frame. Once every image is loaded, I launch the capture and move on to the next frame, as in the sketch below.
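Here is a rough TypeScript sketch of that loop; Scene, frameImagePath and sendToFfmpeg are hypothetical stand-ins, not the project's real objects:


// Rough sketch of the per-frame capture loop (names are hypothetical).
interface SceneVideo {
  img: HTMLImageElement; // the element that replaces the original <video>
}

interface Scene {
  canvas: HTMLCanvasElement;
  videos: SceneVideo[];
  duration: number;          // seconds
  draw(time: number): void;  // paints every layer for the given time
}

async function captureScene(
  scene: Scene,
  fps: number,
  frameImagePath: (video: SceneVideo, time: number) => string, // e.g. a path to frames/<n>.png
  sendToFfmpeg: (canvas: HTMLCanvasElement) => Promise<void>,
): Promise<void> {
  const frameCount = Math.round(scene.duration * fps);
  for (let frame = 0; frame < frameCount; frame++) {
    const t = frame / fps;
    // Swap every video for the pre-extracted PNG of this exact instant
    for (const video of scene.videos) {
      video.img.src = frameImagePath(video, t);
      await video.img.decode(); // wait until the frame image is fully loaded
    }
    scene.draw(t);                    // render the whole scene onto the canvas
    await sendToFfmpeg(scene.canvas); // grab the pixels and feed the encoder
  }
}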


Everything works as expected, but it's too slow!
Even with a powerful computer (Ryzen 3900X, RTX 2080 Super, 32 GB of RAM, NVMe 970 Evo Plus), in the best case I can capture a basic full-hd movie (if it contains videos) at 40 FPS.


It may sound good enough, but it's not.
Our company produces thousands of mp4 files every day.
A slow encoding process means more servers at work, so it will be more expensive for us.


Until now, my company used (and is still using) a tool based on Adobe Flash, because the whole video-editing tool was made with Flash. I was (and am) in charge of translating the whole thing into HTML. I reproduced every feature one by one over 4 years (it's by far my biggest project) and this is the very last step, but even though the html version of our player works very well, the encoding process is much slower than the Flash version (which is able to encode full-hd at 90-100 FPS).


I put console.log everywhere to find out what makes the encoding so slow, and there are 2 bottlenecks:


As I said before, for each frame, if there are videos in the current scene, I replace the video elements with images representing the right frame at the right time. Since I'm using local files, I expected loading to be almost synchronous. That's not the case at all; it takes more than 10 ms in most cases.


So my first question is: what is the fastest way to handle local image loading with javascript for this final output?


I don't care about the technology involved, I have no preference; I just want to be able to load my local images faster than what I get right now.
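For what it's worth, one possible direction (a sketch only, and it assumes an Electron-style context where Node's fs module and the DOM APIs coexist) is to read the PNG bytes directly and decode them with createImageBitmap, skipping the HTMLImageElement src/onload round trip:


import { readFile } from "fs/promises";

// Sketch: decode a local PNG without going through an <img> element.
// Assumes a renderer process where both fs and createImageBitmap exist.
async function loadFrame(path: string): Promise<ImageBitmap> {
  const bytes = await readFile(path);                     // raw PNG bytes from disk
  const blob = new Blob([bytes], { type: "image/png" });  // wrap them for the decoder
  return createImageBitmap(blob);                         // asynchronous, off-main-thread decode
}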


The second bottleneck is weird and to be honest I don't understand what's happening here.


When the current frame is ready to be captured, I need to get its data using CanvasRenderingContext2D.getImageData in order to send it to ffmpeg, and this particular step is very slow.


This single line


let imageData = canvas.getContext("2d").getImageData(0,0,1920,1080); 



takes something like 12-13 ms.
It's very slow!
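One browser-dependent thing that may be worth checking (an assumption on my part, not a guaranteed fix) is whether the 2d context was created with willReadFrequently, which hints to the browser to keep the canvas in CPU memory so getImageData avoids a GPU read-back on every frame:


// Sketch: ask for a read-back friendly 2d context (support varies by browser).
const captureCanvas = document.querySelector("canvas")!; // the capture canvas (assumption)
const ctx = captureCanvas.getContext("2d", { willReadFrequently: true })!;
const imageData = ctx.getImageData(0, 0, 1920, 1080);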


So I'm also looking for another way to extract the pixel data from my canvas.


A few days ago, I found an alternative to getImageData using the new VideoFrame class, which was created to be used with the VideoEncoder & VideoDecoder classes coming in Chrome 86.
You can do something like this:


let buffers: Uint8Array[] = [];
createImageBitmap(canvas).then((bmp) => {
  // Wrap the bitmap in a VideoFrame; the frame is exposed in I420 format,
  // so there are 3 planes: Y, U and V.
  let videoFrame = new VideoFrame(bmp);
  for (let i = 0; i < 3; i++) {
    buffers[i] = new Uint8Array(videoFrame.planes[i].length);
    videoFrame.planes[i].readInto(buffers[i]);
  }
});



It allows me to grab the pixel data around 25% faster than getImageData, but as you can see, I don't get a single RGBA buffer; instead I get 3 weird buffers matching the I420 format.


Ideally, I would like to send it directly to ffmpeg, but I don't know how to deal with these 3 buffers (I have no experience with the I420 format).
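One way those three planes could be consumed (a sketch under assumptions: 1920x1080 frames, plane strides equal to the row widths, ffmpeg on the PATH) is to write Y, U and V back to back into an ffmpeg process reading raw yuv420p from stdin:


import { spawn } from "child_process";

// Sketch: feed raw I420 frames to ffmpeg over stdin. yuv420p expects the
// full-resolution Y plane followed by the half-resolution U and V planes,
// which is what the three buffers above contain (as long as each plane's
// stride matches its row width; otherwise the rows must be repacked).
const ffmpeg = spawn("ffmpeg", [
  "-f", "rawvideo",
  "-pix_fmt", "yuv420p",
  "-s", "1920x1080",
  "-r", "60",            // capture framerate (assumption)
  "-i", "-",             // read the frames from stdin
  "-c:v", "libx264",
  "output.mp4",
]);

function writeFrame(planes: Uint8Array[]): void {
  // planes = [Y, U, V], as filled by videoFrame.planes[i].readInto(...)
  for (const plane of planes) {
    ffmpeg.stdin.write(plane);
  }
}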


I'm not at all sure that the solution involving VideoFrame is a good one. If you know a faster way to transfer the data from a canvas to ffmpeg, please tell me.


Thanks for reading this very long post.
Any help would be greatly appreciated.