
Other articles (76)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
Images
15 May 2013 -
Definable image and logo sizes
9 February 2011
In many places on the site, logos and images are resized to fit the slots defined by the themes. All of these sizes, which can change from one theme to another, can be defined directly in the theme, sparing the user from having to configure them manually after changing the appearance of the site.
These image sizes are also available in the specific configuration of MediaSPIP Core. The maximum size of the site logo in pixels, we allow (...)
On other sites (9206)
-
Movie making from lyrics with timestamps in python
26 May 2020, by carl
I have lyrics from musixmatch with timestamps. I want to build a video with the lyric lines shown over images in a folder, numbered from 1 to n.

As seen in this post, I thought ffmpeg would be something that could help me, but there isn't much information that I can find.

Also, the answer given by @llogan gives only a vague idea of forming videos with the "subtitle filter" in ffmpeg.

It would be very helpful if you could provide an example to explain your idea. (It could also help other coders at any time :) )

Thanks in advance
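
One workable approach, assuming the musixmatch data gives a start time per lyric line: write the lines out as an SRT subtitle file, then burn it into a slideshow of the numbered images with ffmpeg's subtitles filter. A minimal sketch (the lyric list, timings, and file names below are hypothetical):

```python
def fmt(t):
    """Format seconds as an SRT timestamp HH:MM:SS,mmm."""
    ms = int(round((t - int(t)) * 1000))
    t = int(t)
    return f"{t // 3600:02d}:{t % 3600 // 60:02d}:{t % 60:02d},{ms:03d}"

def to_srt(lyrics):
    """lyrics: list of (start_time_in_seconds, line_text), sorted by time."""
    blocks = []
    for i, (start, text) in enumerate(lyrics):
        # Each line ends where the next begins; the last line gets 5 s (an assumption).
        end = lyrics[i + 1][0] if i + 1 < len(lyrics) else start + 5
        blocks.append(f"{i + 1}\n{fmt(start)} --> {fmt(end)}\n{text}\n")
    return "\n".join(blocks)

srt = to_srt([(0.0, "First line"), (3.5, "Second line")])
print(srt)
```

The resulting file (say, lyrics.srt) could then be combined with the numbered images with something like `ffmpeg -framerate 1/5 -i %d.png -vf subtitles=lyrics.srt out.mp4` — a sketch only; see the ffmpeg subtitles filter documentation for the exact flags your build supports.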


-
Why does the source order matter when working with multiple media sources in a single AVFormatContext?
24 April 2024, by Mehmet YILMAZ
avformat_open_input() deletes the AVFormatContext* and returns -6 when the source order changes.

I am trying to open multiple media sources dynamically, with different (mixed) formats and codecs, in a single context (AVFormatContext).

My media sources are a BlackMagic DeckLink Duo SDI input as the first source and an mp4 file or rtsp stream as the second.

When I open (avformat_open_input()) source 2 (the RTSP stream or MP4 file) first and then open the BlackMagic DeckLink Duo, everything proceeds as expected.

But when I change the order, first opening the DeckLink and then trying to open the RTSP stream or MP4 file, then, as I inspected in the step debugger, the AVFormatContext* is deleted inside avformat_open_input() and it returns -6 as the result.

Please find a simple error reproduction code snippet below:


extern "C" {
#include <libavformat/avformat.h>
}
#include <cstdlib>  // exit()
#include <iostream> // std::cout

AVFormatContext* context{avformat_alloc_context()};
const char* url_source1{"DeckLink Duo (1)"};
const AVInputFormat* format_source1{av_find_input_format("decklink")};

const char* url_source2{"http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4"};

// Open the first media input
int result = avformat_open_input(&context, url_source1, format_source1, NULL);

if(result < 0) {
 exit(1);
}

// Open the second media input
// This function in current order deletes the context and returns -6
result = avformat_open_input(&context, url_source2, NULL, NULL);
if(result < 0) {
 exit(1);
}

// Since the context was deleted in the previous step, a segmentation fault occurs here!
result = avformat_find_stream_info(context, NULL);
if(result < 0) {
 exit(1);
}

std::cout << "Total number of streams: " << context->nb_streams << std::endl;
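
As an aside, FFmpeg's AVERROR codes are negated POSIX errno values, so the -6 above can be decoded; on Linux it corresponds to ENXIO ("No such device or address") — interpreting the snippet's -6 this way is an assumption on my part, not something stated in the post:

```python
import errno
import os

code = -6  # value returned by avformat_open_input() in the snippet above
sym = errno.errorcode[-code]   # reverse-map the errno number to its symbol
msg = os.strerror(-code)       # human-readable description
print(sym, "-", msg)           # on Linux: ENXIO - No such device or address
```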




But when I change the order and call avformat_open_input() first for the mp4 file and then for the DeckLink device, as follows, it proceeds as expected, with no error.

AVFormatContext* context{avformat_alloc_context()};
const char* url_source1{"DeckLink Duo (1)"};
const AVInputFormat* format_source1{av_find_input_format("decklink")};

const char* url_source2{"http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4"};


// Open the second media input
int result = avformat_open_input(&context, url_source2, NULL, NULL);
if(result < 0) {
 exit(1);
}


// Open the first media input
result = avformat_open_input(&context, url_source1, format_source1, NULL);

if(result < 0) {
 exit(1);
}


result = avformat_find_stream_info(context, NULL);
if(result < 0) {
 exit(1);
}

std::cout << "Total number of streams: " << context->nb_streams << std::endl;




-
Adding slides of a pdf presentation to the audio file of the same presentation with ffmpeg
27 March 2024, by revher
I want to add the slides of a pdf presentation to the audio file of the same presentation with ffmpeg (or whatever).


I split the pdf presentation using pdftoppm into numbered png images and tried to use ffmpeg to sync the audio file with the images. But I am wondering if the output file could be smaller.
For example, the following command will split the file 'foo.pdf' into the files 'output-01.png', 'output-02.png', etc.:


$ pdftoppm foo.pdf -png output



Then I listened to the audio file 'foo-audio.mp3' and noted the end of each slide in minutes and seconds. Using a spreadsheet, you have to calculate the exact timing of each slide in seconds, which can be expressed as 40 for 40 seconds (for the program of the conference) or 00:01:37 for 1 minute and 37 seconds (for your first slide), and paste it into a text file 'foo-images.txt' which will look like:


file ./program.png
duration 40 
file ./output-01.png
duration 00:01:37
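
The minutes-and-seconds arithmetic from the spreadsheet step can also be done in a few lines of Python. This sketch (slide names and durations are just the ones from the example above) accepts either notation — plain seconds or HH:MM:SS — and emits concat-demuxer entries:

```python
def to_seconds(ts):
    """Accept '40' or '00:01:37' and return the value in seconds."""
    total = 0
    for part in str(ts).split(":"):
        total = total * 60 + int(part)  # each colon shifts by a factor of 60
    return total

def concat_entries(slides):
    """slides: list of (png_path, duration_as_noted_in_the_spreadsheet)."""
    lines = []
    for path, dur in slides:
        lines.append(f"file {path}")
        lines.append(f"duration {to_seconds(dur)}")
    return "\n".join(lines)

print(concat_entries([("./program.png", "40"), ("./output-01.png", "00:01:37")]))
```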



Then the video can be built with the command:


$ ffmpeg -safe 0 -f concat -i foo-images.txt -fps_mode vfr -pix_fmt yuv420p output.mp4
 height not divisible by 2 (1500x1125)



but I got the above error that the height was not divisible by 2.
Then I found (on the ffmpeg mailing list) that I could use the '-vf scale=1680:-2' option, and the following command was successful:


$ ffmpeg -safe 0 -f concat -i foo-images.txt -vf scale=1680:-2 -pix_fmt yuv420p output.mp4




and in order to sync the audio with the images and get the final video 'foo-video.mp4':


$ ffmpeg -safe 0 -f concat -i foo-images.txt -i foo-audio.mp3 -vf scale=1680:-2 -pix_fmt yuv420p foo-video.mp4 




It worked, but the video is about 60 MB for 30 slides and a 30-minute presentation. Is this a correct ratio? The video can at least be viewed with an Apple viewer. But is there a benefit to using ffmpeg and command lines compared to using software like iMovie or Acrobat to get the same result?
Also, I am confused by the scale of 1680 (which is fine) and the 420p of pix_fmt.
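
One way to sanity-check the 60 MB figure is to compute the average bitrate it implies:

```python
# Rough arithmetic for the question's numbers: ~60 MB over a 30-minute video.
size_bytes = 60 * 1024 * 1024   # approximate file size
duration_s = 30 * 60            # presentation length in seconds
avg_kbps = size_bytes * 8 / 1000 / duration_s
print(round(avg_kbps))          # about 280 kbit/s on average
```

Around 280 kbit/s is modest for 1680-pixel-wide video, so the size is not unreasonable, and mostly static slides compress further with a higher CRF or slower preset. Note also that pix_fmt yuv420p only selects the chroma subsampling (for broad player compatibility) and is unrelated to the 1680 scale width.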