
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (22)
-
Accepted formats
28 January 2010, by
The following commands give information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
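For example, to check whether a particular codec or container is present in the local build, the output of those commands can be filtered (a minimal sketch; the search patterns are only illustrative):

# Look for H.264 support among the codecs of the local build
ffmpeg -codecs | grep -i 264
# Look for mp4 muxing/demuxing support among the formats
ffmpeg -formats | grep -i mp4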
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
To begin with, we (...)
-
Adding notes and captions to images
7 February 2011, by
To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
-
List of compatible distributions
26 April 2011, by
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution name | Version name | Version number
Debian | Squeeze | 6.x.x
Debian | Wheezy | 7.x.x
Debian | Jessie | 8.x.x
Ubuntu | The Precise Pangolin | 12.04 LTS
Ubuntu | The Trusty Tahr | 14.04
If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
On other sites (3320)
-
What FFmpeg performance settings to use for processing videos for the web
19 June 2015, by eAbi
I have a few questions regarding the usage of ffmpeg for processing videos for the web. I'm a beginner, so please bear with me (although I have read some docs on the internet).
Performance
- First of all, given that FFmpeg utilizes all cores at 100%, what is the actual parallelism efficiency?
Let's assume the following scenario: I have a video (full HD; it doesn't matter which encoder / compression format was used to produce it) and I want to downscale it to various sizes (e.g. 240px, 480px and 720px height) in mp4 format (thus using libx264 with AAC audio).
Using ffmpeg, I see that all of my laptop's cores (8) are used at 100%, and I was wondering which scenario improves the overall performance of the whole processing task. This boils down to two scenarios (a command sketch of both follows the list): taking the video mentioned above as input, to obtain the 3 output videos (at 240px, 480px and 720px heights), we either:
- Process the input video and obtain 1 output video at a time, letting all cores work simultaneously at 100%;
- Process the video to obtain all output videos in parallel, binding each output video to a single processor core working at 100%;
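A minimal command sketch of the two scenarios (file names, core numbers and scaling parameters are only illustrative assumptions, not taken from the question):

# Scenario 1: a single ffmpeg process using all cores, producing the three outputs in one run
ffmpeg -i input.mp4 \
  -vf scale=-2:240 -c:v libx264 -c:a aac out_240.mp4 \
  -vf scale=-2:480 -c:v libx264 -c:a aac out_480.mp4 \
  -vf scale=-2:720 -c:v libx264 -c:a aac out_720.mp4

# Scenario 2: three ffmpeg processes, each pinned to one core (Linux, taskset);
# adding -threads 1 would also cap the encoder threads inside each process
taskset -c 0 ffmpeg -i input.mp4 -vf scale=-2:240 -c:v libx264 -c:a aac out_240.mp4 &
taskset -c 1 ffmpeg -i input.mp4 -vf scale=-2:480 -c:v libx264 -c:a aac out_480.mp4 &
taskset -c 2 ffmpeg -i input.mp4 -vf scale=-2:720 -c:v libx264 -c:a aac out_720.mp4 &
wait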
So the question is actually reduced to the parallelism efficiency of the ffmpeg program.
This means that letting ffmpeg run a procVideo task (one that takes 1 input video and produces 1 output video, via transcoding/downscaling and so on) on N processor cores doesn't mean it finishes the task N times faster than running the same task bound to a single core. So if the efficiency is smaller than 100%, it's better to run N procVideo tasks in parallel, each bound to a single core, rather than processing each output video sequentially.
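One rough way to estimate that efficiency empirically (a sketch only; file names are placeholders and timings vary between runs):

# T_all: wall-clock time of one run that is free to use all cores
time ffmpeg -i input.mp4 -vf scale=-2:480 -c:v libx264 -c:a aac out_all.mp4
# T_one: wall-clock time of the same run pinned to a single core
time taskset -c 0 ffmpeg -i input.mp4 -vf scale=-2:480 -c:v libx264 -c:a aac out_one.mp4
# Parallel efficiency over N cores is then roughly T_one / (N * T_all);
# values well below 1 favour running several single-core jobs side by side.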
Codecs
Other than the performance question above, the choice of codecs bugs me. I am trying to produce mp4 videos because of the format's wide support in HTML5 browsers.
So, given an input video in any format, I want to convert it to mp4, using the libx264 codec with AAC audio (a command sketch follows the list below).
- Should I use libx264, x264 or h264 for video encoding/decoding?
- Should I use libfdk_aac, libaacplus or aac for audio encoding/decoding to AAC?
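A minimal conversion sketch under those assumptions (input.avi is a placeholder name; libfdk_aac is only available if the ffmpeg build was compiled with it, otherwise the native aac encoder is the usual choice):

# H.264 video with the built-in AAC encoder
ffmpeg -i input.avi -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k output.mp4
# Same conversion using libfdk_aac when the build includes it
ffmpeg -i input.avi -c:v libx264 -crf 23 -preset medium -c:a libfdk_aac -b:a 128k output.mp4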
Also, I would like to know the licensing fees for each of the above codecs, as the online resources on this are quite limited / hard to understand.
If anyone could shed some light on these questions, I would be really grateful! Thanks for your time!
-
Is there a way to use ffmpeg audio filters to automatically synchronize 2 streams with similar content
29 May 2015, by user3741412
I have a situation where I have a video capture of HD content via HDMI, with audio from a sound board that goes through an impedance drop into the microphone input of a camcorder. That same signal is split at line level to a 'line in' jack on the same computer that is capturing the HDMI. Alternatively, I can capture the audio via USB from the soundboard, which is probably the best plan, but it carries the same issue.
The point is that the line-in or USB capture will be of much higher quality than the HDMI one, because the line out -> impedance change -> mic in path produces inferior quality: simply brushing the mic jack on the camera while trying to change the zoom (close proximity) can cause noise on the recording.
So I can do this today:
- Take the good sound and the camera-captured sound, load each into Audacity, and quickly use the time-shift tool to fit the good audio exactly to the questionable audio from the HDMI capture, then cut the good audio to the exact length of the video. Then I can use ffmpeg or other video editing software to replace the questionable audio with the better audio (a command sketch of that replacement step follows this item).
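A minimal sketch of that last replacement step with ffmpeg (video.mp4 and good_audio.wav are placeholder names, not from the question):

# Keep the video stream untouched, drop the HDMI audio, and mux in the better track
ffmpeg -i video.mp4 -i good_audio.wav -map 0:v -map 1:a -c:v copy -c:a aac -shortest output.mp4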
But while somewhat quick and easy, this always carries a bit of human error and takes time. I'd like to automate it if possible, as the process is repeated at least weekly throughout the year.
Does anyone have a view on whether any of these ideas have merit, or could you suggest another approach?
-
I suspect, but have yet to confirm, that the system timestamp of the start time may be recorded both in audio captured with something like Audacity (or with the USB capture tool from the sound board) and in the HDMI MPEG-2 video. I tried ffprobe on a couple of Audacity-captured .wav files but didn't see anything in the results about such a time code; perhaps other audio formats or other probing tools include this info. Can anyone advise whether this is common with any particular capture tools or file formats?
- If so, I think I could get the best results by extracting this information and then using simple adelay and atrim filters in ffmpeg to sync reliably, directly from the two sources, in one ffmpeg call (a sketch of this approach follows the list). This is all theoretical for me right now; I've never tried either of these filters, and I'm just trying to avoid blind alleys by asking for advice up front.
-
If such timestamps are not embedded, I could possibly use the file-system timestamp for the same idea expressed in 1a, but I suspect the file-open of the two capture tools may have different inherent delays. Possibly these delays will turn out to be nearly constant, and the approach could work with a built-in constant anticipation delay, but that sounds messy and less reliable than idea 1. Still, I'd take it if it turns out reasonably reliable.
-
Are there any ffmpeg or general digital-audio experts out there who know of particular filters that can be applied to the actual data to look for similarities, for example normalizing the peak amplitudes, or normalizing both streams to some RMS value, then stepping through a short 10-second snippet of audio, shifting one stream 0.01 s against the other repeatedly, subtracting the two and looking for a minimum? It sounds like it could take a while, but if it could do this in less than a minute and be reliable, I suspect it could work. I have only rudimentary knowledge of audio streams, and perhaps what I suggest is just not plausible, but since each stream starts from the same source I think there should be a chance. I am just way out of my depth as to how to go down this road, so if someone knows such magic or can throw me some names of filters and example calls, I can explore whether I can make it work.
-
Any hardware-level suggestions for taking a line-level output down to a mic-level input without the problems I am seeing with a simple in-line impedance-drop module, so that I can simply rely on the audio from the HDMI?
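A rough sketch of the timestamp-based idea (1) above, assuming both captures actually carry a usable creation_time tag and that the offset value and file names are placeholders:

# Check whether each capture stores a creation timestamp in its container metadata
ffprobe -v error -show_entries format_tags=creation_time video.mp4
ffprobe -v error -show_entries format_tags=creation_time good_audio.wav

# If the good audio starts, say, 1.250 s before the video, trim that lead-in and re-mux;
# if it starts later instead, pad it with adelay (milliseconds per channel)
ffmpeg -i video.mp4 -i good_audio.wav \
  -filter_complex "[1:a]atrim=start=1.250,asetpts=PTS-STARTPTS[a]" \
  -map 0:v -map "[a]" -c:v copy -c:a aac -shortest output.mp4
# Late-start variant of the filter: "[1:a]adelay=1250|1250[a]"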
Thanks in advance for any pointers or suggestions!
-
avformat/mxfenc: Write Mastering Display Colour Volume to MXF
9 September 2020, by Harry Mallon