
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (67)
-
Organising by category
17 May 2013. In MediaSPIP, a rubrique (section) has two names: category and rubrique.
The various documents stored in MediaSPIP can be filed under different categories. A category is created by clicking "publier une catégorie" (publish a category) in the publish menu at the top right (after logging in). A category can itself be filed under another category, which means a whole tree of categories can be built.
The next time a document is published, the newly created category will be offered (...) -
Retrieving information from the master site when installing an instance
26 November 2010. Purpose
On the main site, a shared (mutualised) instance is defined by several things: the data in the spip_mutus table; its logo; its main author (id_admin in the spip_mutus table, matching an id_auteur in the spip_auteurs table), who will be the only one able to finalise the creation of the mutualised instance;
It can therefore make perfect sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...) -
The plugin: Podcasts.
14 July 2010. Podcasting is, once again, a problem that reveals how data transport on the Internet is being standardised.
Two interesting formats exist: the one developed by Apple, heavily geared towards iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "open" and is notably backed by Yahoo and the Miro software;
File types supported in the feeds
Apple's format only allows the following types in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)
On other sites (2560)
-
Is there any liability regarding ffmpeg? [closed]
28 November 2023, by kenan bahadir. My question is as follows: I want to upload a short video to YouTube. I have the code below, but it does not work as I want. Can you help me?
crop_command = f'ffmpeg -i "input_file" -filter_complex "crop=1080:1080:420:0,scale=1920:1920:flags=lanczos" -acodec aac -strict experimental "temp_output_file"'
os.system(crop_command)




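As posted, the f-string never interpolates anything: ffmpeg receives the literal file names "input_file" and "temp_output_file". Below is a minimal sketch of the same crop-and-scale call with real interpolation, assuming input_file and temp_output_file are Python variables holding the source and destination paths (subprocess is used instead of os.system so each argument is passed without shell-quoting issues); the paths shown are placeholders.

import subprocess

input_file = "input.mp4"         # placeholder source clip
temp_output_file = "output.mp4"  # placeholder destination

# Same filter as in the question: crop a 1080x1080 square starting at x=420,
# upscale it to 1920x1920 with Lanczos, and encode the audio as AAC.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", input_file,
        "-filter_complex", "crop=1080:1080:420:0,scale=1920:1920:flags=lanczos",
        "-c:a", "aac",
        temp_output_file,
    ],
    check=True,
)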
-
FFmpeg error: "avcodec_send_frame" returns "Invalid argument"
17 October 2023, by Paulo Coutinho. I have a problem: the call to avcodec_send_frame throws "Error sending frame for encoding: Invalid argument" (-22). I have already searched, checked and rechecked, and found nothing; the code stays close to the ffmpeg examples. Can anyone help me? Thanks.

This is my code:


// Headers this snippet relies on (added here as an assumption; the original post omits them).
// FFmpeg's C headers must be wrapped in extern "C" when building as C++.
// Message, Response and getPixelFormatName() come from the poster's own project and are not shown.
extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
#include <libavutil/opt.h>
}
#include <spdlog/spdlog.h>

static void callbackAddSubtitle(const Message &m, const Response r)
{
 try
 {
 av_log_set_level(AV_LOG_DEBUG);

 spdlog::debug("[Mapping :: callbackAddSubtitle] Adding subtitle...");

 auto inputOpt = m.get("input");
 auto outputOpt = m.get("output");

 if (!inputOpt.has_value() || !outputOpt.has_value())
 {
 r(std::string{"INVALID-PARAMS"});
 return;
 }

 const std::string &input = inputOpt.value();
 const std::string &output = outputOpt.value();

 // initialize input
 spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing input video...");

 AVFormatContext *inputFormatCtx = avformat_alloc_context();
 if (avformat_open_input(&inputFormatCtx, input.c_str(), nullptr, nullptr) != 0)
 {
 spdlog::error("Failed to open input");
 r(std::string{"ERROR-OPEN-INPUT"});
 return;
 }

 if (avformat_find_stream_info(inputFormatCtx, nullptr) < 0)
 {
 spdlog::error("Failed to find stream information");
 avformat_close_input(&inputFormatCtx);
 r(std::string{"ERROR-FIND-STREAM"});
 return;
 }

 int videoStreamIndex = av_find_best_stream(inputFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
 if (videoStreamIndex < 0)
 {
 spdlog::error("Could not find a video stream");
 r(std::string{"ERROR-FIND-VIDEO-STREAM"});
 return;
 }

 AVRational timeBase = inputFormatCtx->streams[videoStreamIndex]->time_base;

 AVCodecParameters *inputCodecPar = inputFormatCtx->streams[videoStreamIndex]->codecpar;
 const AVCodec *inputCodec = avcodec_find_decoder(inputCodecPar->codec_id);
 AVCodecContext *inputCodecCtx = avcodec_alloc_context3(inputCodec);

 avcodec_parameters_to_context(inputCodecCtx, inputCodecPar);
 avcodec_open2(inputCodecCtx, inputCodec, nullptr);

 // initialize input audio
 spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing input audio...");

 int audioStreamIndex = av_find_best_stream(inputFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);
 if (audioStreamIndex < 0)
 {
 spdlog::error("Could not find an audio stream");
 r(std::string{"ERROR-FIND-AUDIO-STREAM"});
 return;
 }

 AVCodecParameters *inputAudioCodecPar = inputFormatCtx->streams[audioStreamIndex]->codecpar;
 const AVCodec *inputAudioCodec = avcodec_find_decoder(inputAudioCodecPar->codec_id);
 AVCodecContext *inputAudioCodecCtx = avcodec_alloc_context3(inputAudioCodec);

 avcodec_parameters_to_context(inputAudioCodecCtx, inputAudioCodecPar);
 avcodec_open2(inputAudioCodecCtx, inputAudioCodec, nullptr);

 // initialize output video
 spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing output video...");

 AVFormatContext *outputFormatCtx = nullptr;
 avformat_alloc_output_context2(&outputFormatCtx, nullptr, nullptr, output.c_str());
 AVStream *outputStream = avformat_new_stream(outputFormatCtx, nullptr);

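 // Note (not in the original post): inputCodec was looked up above with avcodec_find_decoder(), and the
 // same AVCodec is reused here to open the context that later receives filtered frames through
 // avcodec_send_frame(). avcodec_send_frame() feeds an encoder, so a context opened on a decoder is a
 // likely cause of the "Invalid argument" (-22) described in the question.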
 AVCodecContext *outputCodecCtx = avcodec_alloc_context3(inputCodec);
 avcodec_parameters_to_context(outputCodecCtx, inputCodecPar);
 int retOutVideo = avcodec_open2(outputCodecCtx, inputCodec, nullptr);

 if (retOutVideo < 0)
 {
 char err[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retOutVideo);
 spdlog::error("Failed to initialize output video: {}", err);
 r(std::string{"ERROR-INIT-OUTPUT-VIDEO"});
 return;
 }

 outputStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 outputStream->codecpar->codec_id = inputCodec->id;
 avcodec_parameters_from_context(outputStream->codecpar, outputCodecCtx);

 if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
 {
 avio_open(&outputFormatCtx->pb, output.c_str(), AVIO_FLAG_WRITE);
 }

 const char *pixelFormatName = getPixelFormatName(outputCodecCtx->pix_fmt);
 spdlog::debug("Pixel Format: {}", pixelFormatName);

 // initialize output audio
 spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing output audio...");

 AVStream *outputAudioStream = avformat_new_stream(outputFormatCtx, nullptr);
 AVCodecContext *outputAudioCodecCtx = avcodec_alloc_context3(inputAudioCodec);
 avcodec_parameters_to_context(outputAudioCodecCtx, inputAudioCodecPar);
 int retOutAudio = avcodec_open2(outputAudioCodecCtx, inputAudioCodec, nullptr);

 if (retOutAudio < 0)
 {
 char err[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retOutAudio);
 spdlog::error("Failed to initialize output audio: {}", err);
 r(std::string{"ERROR-INIT-OUTPUT-AUDIO"});
 return;
 }

 outputAudioStream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
 outputAudioStream->codecpar->codec_id = inputAudioCodec->id;
 avcodec_parameters_from_context(outputAudioStream->codecpar, outputAudioCodecCtx);

 // initialize filters
 spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing filters...");

 AVFilterGraph *filterGraph = avfilter_graph_alloc();
 if (!filterGraph)
 {
 spdlog::error("Failed to allocate filter graph");
 r(std::string{"ERROR-FILTER-GRAPH"});
 return;
 }

 AVFilterContext *bufferSinkCtx;
 AVFilterContext *bufferSrcCtx;

 const AVFilter *bufferSink = avfilter_get_by_name("buffersink");
 const AVFilter *bufferSrc = avfilter_get_by_name("buffer");

 // input filter
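 // The buffer source is configured with the decoded stream's geometry, pixel format, time base and
 // sample aspect ratio, so the frames pushed in later with av_buffersrc_add_frame_flags() match what
 // the graph expects.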
 char filterInArgs[512];
 snprintf(filterInArgs, sizeof(filterInArgs), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d", inputCodecPar->width, inputCodecPar->height, inputCodecCtx->pix_fmt, timeBase.num, timeBase.den, inputCodecCtx->sample_aspect_ratio.num, inputCodecCtx->sample_aspect_ratio.den);

 spdlog::debug("[Mapping :: callbackAddSubtitle] Buffer src args: {}", filterInArgs);

 int retFilterIn = avfilter_graph_create_filter(&bufferSrcCtx, bufferSrc, "in", filterInArgs, nullptr, filterGraph);
 if (retFilterIn < 0)
 {
 char err[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retFilterIn);
 spdlog::error("Failed to create bufferSrcCtx: {}", err);
 r(std::string{"ERROR-CREATE-FILTER-SRC"});
 return;
 }

 // output filter
 int retFilterOut = avfilter_graph_create_filter(&bufferSinkCtx, bufferSink, "out", nullptr, nullptr, filterGraph);

 if (retFilterOut < 0)
 {
 char err[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retFilterOut);
 spdlog::error("Failed to create bufferSinkCtx: {}", err);
 r(std::string{"ERROR-CREATE-FILTER-SINK"});
 return;
 }

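 // Restrict the buffer sink to yuv420p so frames pulled out of the graph arrive in that pixel format.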
 enum AVPixelFormat pix_fmts[] = {AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE};
 av_opt_set_int_list(bufferSinkCtx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);

 // add filters to graph and link them
 const char *filterSpec = "drawtext=text='Legenda Adicionada Automaticamente Via FFMPEG e C++': fontcolor=yellow: bordercolor=black: fontfile='/Users/paulo/Downloads/roboto/Roboto-Black.ttf'";
 const AVFilter *filter = avfilter_get_by_name("drawtext");

 AVFilterInOut *outputs = avfilter_inout_alloc();
 AVFilterInOut *inputs = avfilter_inout_alloc();

 outputs->name = av_strdup("in");
 outputs->filter_ctx = bufferSrcCtx;
 outputs->pad_idx = 0;
 outputs->next = nullptr;
 inputs->name = av_strdup("out");
 inputs->filter_ctx = bufferSinkCtx;
 inputs->pad_idx = 0;
 inputs->next = nullptr;
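 // For avfilter_graph_parse_ptr(), "outputs" describes the open output pad of the buffer source
 // (labelled "in") that feeds the parsed drawtext chain, and "inputs" describes the open input pad of
 // the buffer sink (labelled "out") that the chain's result is connected to.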

 if (avfilter_graph_parse_ptr(filterGraph, filterSpec, &inputs, &outputs, nullptr) < 0)
 {
 spdlog::error("Failed to parse filter graph");
 r(std::string{"ERROR-PARSE-FILTER"});
 return;
 }

 if (avfilter_graph_config(filterGraph, nullptr) < 0)
 {
 spdlog::error("Failed to configure filter graph");
 r(std::string{"ERROR-CONFIG-FILTER"});
 return;
 }

 // header
 spdlog::debug("[Mapping :: callbackAddSubtitle] Writing header...");

 if (avformat_write_header(outputFormatCtx, nullptr) < 0)
 {
 spdlog::error("Error writing header");
 r(std::string{"ERROR-WRITE-HEADER"});
 return;
 }

 // read frames and write to output
 AVPacket *packet = av_packet_alloc();
 AVFrame *frame = av_frame_alloc();

 frame->format = inputCodecCtx->pix_fmt;
 frame->width = inputCodecCtx->width;
 frame->height = inputCodecCtx->height;

 AVFrame *filt_frame = av_frame_alloc();

 filt_frame->format = inputCodecCtx->pix_fmt;
 filt_frame->width = inputCodecCtx->width;
 filt_frame->height = inputCodecCtx->height;

 while (av_read_frame(inputFormatCtx, packet) >= 0)
 {
 if (packet->stream_index == videoStreamIndex)
 {
 if (avcodec_send_packet(inputCodecCtx, packet) < 0)
 {
 spdlog::error("Error sending packet for decoding");
 r(std::string{"ERROR-SEND-PACKET-DECODE"});
 return;
 }

 while (avcodec_receive_frame(inputCodecCtx, frame) == 0)
 {
 // Send the decoded frame into the filter graph
 if (av_buffersrc_add_frame_flags(bufferSrcCtx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
 {
 spdlog::error("Error while feeding the filtergraph");
 r(std::string{"ERROR-FEED-FILTERGRAPH"});
 return;
 }

 // Pull a filtered frame from the filter graph
 if (av_buffersink_get_frame(bufferSinkCtx, filt_frame) < 0)
 {
 spdlog::error("Error while receiving the filtered frame");
 r(std::string{"ERROR-RECEIVE-FILTERED-FRAME"});
 return;
 }

 // Send the filtered frame for re-encoding
 int retSendFrame = avcodec_send_frame(outputCodecCtx, filt_frame);
 if (retSendFrame < 0)
 {
 char err[AV_ERROR_MAX_STRING_SIZE];
 av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retSendFrame);
 spdlog::error("Error sending frame for encoding: {}", err);
 r(std::string{"ERROR-SEND-FRAME-ENCODE"});
 return;
 }

 AVPacket *output_packet = av_packet_alloc();
 output_packet->data = nullptr;
 output_packet->size = 0;

 // Encode filt_frame into a packet
 if (avcodec_receive_packet(outputCodecCtx, output_packet) == 0)
 {
 // Write the packet to the output stream
 av_write_frame(outputFormatCtx, output_packet);
 av_packet_unref(output_packet);
 }

 av_frame_unref(filt_frame);
 }

 // rescale video packet timestamps to the output stream's time base
 packet->pts = av_rescale_q_rnd(packet->pts, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
 packet->dts = av_rescale_q_rnd(packet->dts, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
 packet->duration = av_rescale_q(packet->duration, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base);
 packet->stream_index = videoStreamIndex;

 // write packet to output video stream
 av_interleaved_write_frame(outputFormatCtx, packet);
 }
 else if (packet->stream_index == audioStreamIndex)
 {
 // rescale timestamps
 packet->pts = av_rescale_q_rnd(packet->pts, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
 packet->dts = av_rescale_q_rnd(packet->dts, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
 packet->duration = av_rescale_q(packet->duration, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base);
 packet->stream_index = audioStreamIndex;

 // write packet to output audio stream
 av_interleaved_write_frame(outputFormatCtx, packet);
 }

 av_packet_unref(packet);
 }

 av_packet_free(&packet);
 av_frame_free(&frame);
 av_frame_free(&filt_frame);

 spdlog::debug("[Mapping :: callbackAddSubtitle] Writing trailer...");

 if (av_write_trailer(outputFormatCtx) < 0)
 {
 spdlog::error("Error writing trailer");
 r(std::string{"ERROR-WRITE-TRAILER"});
 return;
 }

 // cleanup
 spdlog::debug("[Mapping :: callbackAddSubtitle] Cleaning...");

 if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
 {
 avio_closep(&outputFormatCtx->pb);
 }

 avcodec_free_context(&inputCodecCtx);
 avcodec_free_context(&inputAudioCodecCtx);
 avcodec_free_context(&outputCodecCtx);
 avcodec_free_context(&outputAudioCodecCtx);

 avformat_free_context(inputFormatCtx);
 avformat_free_context(outputFormatCtx);

 r(std::string{"OK"});
 }
 catch (const std::exception &e)
 {
 spdlog::error("Error: {}", e.what());
 r(std::string{"ERROR"});
 }
}



The error is:


[2023-10-17 06:30:16.936] [debug] [Mapping :: callbackAddSubtitle] Adding subtitle...
[2023-10-17 06:30:16.936] [debug] [Mapping :: callbackAddSubtitle] Initializing input video...
[NULL @ 0x153604a60] Opening '/Users/paulo/Downloads/movie.mp4' for reading
[file @ 0x6000001fd170] Setting default whitelist 'file,crypto,data'
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] ISO: File Type Major Brand: isom
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Processing st: 0, edit list 0 - media time: 0, duration: 2669670
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Processing st: 1, edit list 0 - media time: 1024, duration: 4272096
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] drop a frame at curr_cts: 0 @ 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Before avformat_find_stream_info() pos: 113542488 bytes read:110788 seeks:1 nb_streams:2
[h264 @ 0x153604cd0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] Decoding VUI
[h264 @ 0x153604cd0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] demuxer injecting skip 1024 / discard 0
[aac @ 0x1536056f0] skip 1024 / discard 0 samples due to side data
[h264 @ 0x153604cd0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] Decoding VUI
[h264 @ 0x153604cd0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0x153604cd0] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x153604cd0] Format yuv420p chosen by get_format().
[h264 @ 0x153604cd0] Reinit context to 1088x1920, pix_fmt: yuv420p
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] All info found
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] After avformat_find_stream_info() pos: 195211 bytes read:305951 seeks:2 frames:2
[2023-10-17 06:30:18.160] [debug] [Mapping :: callbackAddSubtitle] Initializing input audio...
[h264 @ 0x143604330] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x143604330] Decoding VUI
[h264 @ 0x143604330] nal_unit_type: 8(PPS), nal_ref_idc: 3
[2023-10-17 06:30:18.160] [debug] [Mapping :: callbackAddSubtitle] Initializing output video...
[h264 @ 0x143611ec0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x143611ec0] Decoding VUI
[h264 @ 0x143611ec0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[file @ 0x6000001f4000] Setting default whitelist 'file,crypto,data'
[2023-10-17 06:30:18.167] [debug] Pixel Format: YUV420P
[2023-10-17 06:30:18.167] [debug] [Mapping :: callbackAddSubtitle] Initializing output audio...
[2023-10-17 06:30:18.167] [debug] [Mapping :: callbackAddSubtitle] Initializing filters...
[2023-10-17 06:30:18.168] [debug] [Mapping :: callbackAddSubtitle] Buffer src args: video_size=1080x1920:pix_fmt=0:time_base=1/30000:pixel_aspect=1/1
detected 10 logical cores
[in @ 0x6000004ec0b0] Setting 'video_size' to value '1080x1920'
[in @ 0x6000004ec0b0] Setting 'pix_fmt' to value '0'
[in @ 0x6000004ec0b0] Setting 'time_base' to value '1/30000'
[in @ 0x6000004ec0b0] Setting 'pixel_aspect' to value '1/1'
[in @ 0x6000004ec0b0] w:1080 h:1920 pixfmt:yuv420p tb:1/30000 fr:0/1 sar:1/1
[AVFilterGraph @ 0x6000017e8000] Setting 'text' to value 'Legenda Adicionada Automaticamente Via FFMPEG e C++'
[AVFilterGraph @ 0x6000017e8000] Setting 'fontcolor' to value 'yellow'
[AVFilterGraph @ 0x6000017e8000] Setting 'bordercolor' to value 'black'
[AVFilterGraph @ 0x6000017e8000] Setting 'fontfile' to value '/Users/paulo/Downloads/roboto/Roboto-Black.ttf'
[AVFilterGraph @ 0x6000017e8000] query_formats: 3 queried, 2 merged, 0 already done, 0 delayed
[2023-10-17 06:30:18.172] [debug] [Mapping :: callbackAddSubtitle] Writing header...
[h264 @ 0x143604330] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0x143604330] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x143604330] Format yuv420p chosen by get_format().
[h264 @ 0x143604330] Reinit context to 1088x1920, pix_fmt: yuv420p
[Parsed_drawtext_0 @ 0x6000004f4160] Copying data in avfilter.
[Parsed_drawtext_0 @ 0x6000004f4160] n:0 t:0.000000 text_w:424 text_h:16 x:0 y:0
[2023-10-17 06:30:18.182] [error] Error sending frame for encoding: Invalid argument
Returned Value: ERROR-SEND-FRAME-ENCODE



-
Slideshow from .jpg
13 October 2023, by DJArty. I have a command to make a slideshow from *.jpg files:


ffmpeg -r 1/3 -f concat -safe 0 -i <(ls -v *.jpg | sed "s|^|file '$PWD/|") -vf "scale='min(1920,iw)':'min(1080,ih)':force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2" -c:v libx264 -pix_fmt yuv420p -r 60 -preset slow -crf 14 slideshow.mp4


I need smooth transition effects from one jpg to the next, like morphing or a simple fade in / fade out, not just "one picture 3 sec, next picture 3 sec, etc."


In short: I need a nicer-looking slideshow; one possible approach is sketched below.
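One possible direction, sketched in Python under the assumption that ffmpeg 4.3 or newer is available (it ships the xfade filter) and that there are at least two .jpg files in the current directory: give every still its own looped input, normalise all of them to the same size, SAR, frame rate and pixel format, then chain them pairwise with xfade so each picture crossfades into the next. The durations, sizes and encoder settings below are placeholders, not a tested drop-in replacement for the concat command above.

import glob
import subprocess

SHOW, FADE = 3, 1           # seconds each still is shown / crossfade length
W, H, FPS = 1920, 1080, 30

images = sorted(glob.glob("*.jpg"))

inputs, chains = [], []
for i, img in enumerate(images):
    # Loop each still for SHOW seconds and normalise it so xfade accepts the pair.
    inputs += ["-loop", "1", "-t", str(SHOW), "-i", img]
    chains.append(
        f"[{i}:v]scale={W}:{H}:force_original_aspect_ratio=decrease,"
        f"pad={W}:{H}:(ow-iw)/2:(oh-ih)/2,setsar=1,fps={FPS},format=yuv420p[v{i}]"
    )

# Chain the normalised streams pairwise; each offset is measured on the already
# merged timeline, so it grows by (SHOW - FADE) per transition.
last = "v0"
for i in range(1, len(images)):
    chains.append(
        f"[{last}][v{i}]xfade=transition=fade:duration={FADE}:offset={i * (SHOW - FADE)}[x{i}]"
    )
    last = f"x{i}"

subprocess.run(
    ["ffmpeg", "-y", *inputs,
     "-filter_complex", ";".join(chains),
     "-map", f"[{last}]",
     "-c:v", "libx264", "-preset", "slow", "-crf", "14",
     "slideshow.mp4"],
    check=True,
)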