
Media (91)
-
Les Miserables
9 December 2019
Updated: December 2019
Language: French
Type: Text
-
VideoHandle
8 November 2019
Updated: November 2019
Language: French
Type: Video
-
Somos millones 1
21 July 2014
Updated: June 2015
Language: French
Type: Video
-
Un test - mauritanie
3 April 2014
Updated: April 2014
Language: French
Type: Text
-
Pourquoi Obama lit il mes mails ?
4 February 2014
Updated: February 2014
Language: French
-
IMG 0222
6 October 2013
Updated: October 2013
Language: French
Type: Image
Other articles (50)
-
Support audio et vidéo HTML5
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both on conventional computers (...) -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
De l’upload à la vidéo finale [version standalone]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
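The two extra upload-time actions described above can be sketched with the stock ffmpeg command-line tools. This is a hypothetical standalone equivalent, not SPIPMotion's actual code; source.mp4 and thumb.png are placeholder names, and the commands are printed rather than executed so the sketch runs without a media file:

```shell
# Hypothetical equivalents of the two extra upload-time steps:
# 1) retrieve the technical information of the audio/video streams,
# 2) extract one frame as a thumbnail.
# Remove the "echo" prefixes to run them for real.
echo ffprobe -v error -show_streams -print_format json source.mp4
echo ffmpeg -i source.mp4 -ss 1 -frames:v 1 thumb.png
```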
On other sites (4560)
-
Add subtitles to multiple files at once
20 May 2020, by Paul Filipenco
I have a folder with episodes called ep and a folder with subtitles called sub.

Each episode has corresponding subtitles, and I need to bulk-add them with ffmpeg.


I've read that I can add subtitles with the following command:



ffmpeg -i video.avi -vf "ass=subtitle.ass" out.avi




But that only does it one file at a time.

Is there a bulk variant?


Some useful info:

ls ep prints:

<series> - Ep<episode number>.mkv

ls sub prints:

<series> - Ep<episode number>.ass
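Not part of the original post, but one way to get a bulk variant is a plain shell loop that pairs each episode with the same-named subtitle file. The ep/, sub/ and out/ paths follow the folder names above, and the ffmpeg command is only printed (via echo) so the sketch can run without the actual files:

```shell
# Hypothetical bulk wrapper: pair every episode in ep/ with the
# same-named .ass file in sub/ and print the ffmpeg command that
# would burn the subtitles in. Drop the "echo" to really transcode.
mkdir -p out
for video in ep/*.mkv; do
    [ -e "$video" ] || continue          # skip if the glob matched nothing
    base=$(basename "$video" .mkv)
    echo ffmpeg -i "$video" -vf "ass=sub/$base.ass" "out/$base.mkv"
done
```

One caveat: the argument of the ass filter is parsed by ffmpeg itself, so file names containing characters such as ':' or ',' need additional escaping inside the filter string.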


-
Yet another ffmpeg/libx264 issue
26 December 2013, by kerim yucel
My current situation is this: ffmpeg and libx264 have been compiled for Android, as shared and static libraries respectively. Since I have libx264.a and libffmpeg.so, the only thing that remains is to link them and obtain an ffmpeg library that will let me proceed with my application. However, some questions remain unanswered. I am using Ubuntu in a virtual machine under Windows 7, the latest x264 version, the ffmpeg 0.10.4 release, and NDK 7.
I have tried to adjust the --extra-cflags and --extra-ldflags flags in order to include libx264 in the ffmpeg compilation process as well, but I keep failing. Below you may find my build script for ffmpeg and the errors I have encountered.
NDK=~/Android_NDK_r7b
PLATFORM=$NDK/platforms/android-8/arch-arm/
PREBUILT=$NDK/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86
x264=/usr/local
#x264v2=~/x264
function build_one
{
./configure --target-os=linux \
--prefix=$PREFIX \
--enable-cross-compile \
--extra-libs="-lgcc" \
--arch=arm \
--cc=$PREBUILT/bin/arm-linux-androideabi-gcc \
--cross-prefix=$PREBUILT/bin/arm-linux-androideabi- \
--nm=$PREBUILT/bin/arm-linux-androideabi-nm \
--sysroot=$PLATFORM \
# --extra-cflags=" -O3 -fpic -DANDROID -DHAVE_SYS_UIO_H=1 -Dipv6mr_interface=ipv6mr_ifindex -fasm -Wno-psabi -fno-short-enums -fno-strict-aliasing -finline-limit=300 $OPTIMIZE_CFLAGS " \
--extra-cflags="-I$x264/include" \
--enable-shared \
--enable-static \
#--extra-ldflags="-Wl,-rpath-link=$PLATFORM/usr/lib -L$PLATFORM/usr/lib -nostdlib -lc -lm -ldl -llog" \
--extra-ldflags="-L$x264/lib" \
--disable-everything \
# --enable-demuxer=mov \
# --enable-demuxer=h264 \
# --disable-ffplay \
--enable-gpl \
--enable-libx264 \
# --enable-protocol=file \
# --enable-avformat \
# --enable-avcodec \
# --enable-encoder=libx264 \
# --enable-decoder=rawvideo \
#--enable-decoder=mjpeg \
# --enable-decoder=h263 \
# --enable-decoder=mpeg4 \
# --enable-encoder=h264 \
# --disable-network \
#--enable-zlib \
# --disable-avfilter \
#--disable-avdevice \
$ADDITIONAL_CONFIGURE_FLAG
make clean
make -j4 install
$PREBUILT/bin/arm-linux-androideabi-ar d libavcodec/libavcodec.a inverse.o
$PREBUILT/bin/arm-linux-androideabi-ld -rpath-link=$PLATFORM/usr/lib -L$PLATFORM/usr/lib -soname libffmpeg.so -shared -nostdlib -z,noexecstack -Bsymbolic --whole-archive --no-undefined -o $PREFIX/libffmpeg.so libavcodec/libavcodec.a libavformat/libavformat.a libavutil/libavutil.a libswscale/libswscale.a -lc -lm -lz -ldl -llog --warn-once --dynamic-linker=/system/bin/linker $PREBUILT/lib/gcc/arm-linux-androideabi/4.4.3/libgcc.a
}
#arm v7vfpv3
CPU=armv7-a
OPTIMIZE_CFLAGS="-mfloat-abi=softfp -mfpu=vfpv3-d16 -marm -march=$CPU "
PREFIX=./androidIncludeTrialsNDK7/$CPU
ADDITIONAL_CONFIGURE_FLAG=
build_one

If I simply delete the --extra-cflags and --extra-ldflags lines and use the other (commented-out) ones instead, everything works except for a "libx264 not found" error. Otherwise, I get the following errors.
./buildnew.sh: line 35: --extra-cflags=-I/usr/local/include: No such file or directory
./buildnew.sh: line 38: --extra-ldflags=-L/usr/local/lib: No such file or directory
./buildnew.sh: line 40: --disable-everything: command not found
./buildnew.sh: line 44: --enable-gpl: command not found

The compilation process ends with the following.
make: *** [libavdevice/v4l.o] Error 1
make: *** Waiting for unfinished jobs....
/home/mehmet/Android_NDK_r7b/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-ar: creating libavcodec/libavcodec.a
/home/mehmet/Android_NDK_r7b/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-ld: cannot open output file ./androidIncludeTrialsNDK7/armv7-a/libffmpeg.so: No such file or directory

I have x264 installed in /home/mehmet/x264, and when I check with whereis, x264.a shows up as /usr/local/lib. I have tried changing the $x264 path to point to the /home/mehmet/x264 folder, but I get the same error.
Lastly, if I enable both the --extra-cflags and --extra-ldflags flags (both the commented-out ones and the ones in use), I get the same error mentioned above. I am afraid I am making a simple typo here, but I can't see it, and it is driving me crazy. Thanks a lot for your help.
Best.
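A hedged observation, not a confirmed diagnosis: the "--disable-everything: command not found" style of error is exactly what a POSIX shell produces when a comment line sits inside a backslash-continued command, as in the script above. The # makes the shell treat the rest of the line, including its trailing backslash, as a comment, so the ./configure invocation ends there and the remaining options run as standalone commands. A minimal sketch of the pitfall:

```shell
# The commented line below terminates the backslash continuation,
# because '#' comments out its own trailing backslash. The shell
# therefore runs "printf ... --first" and then tries to execute
# "--third" as a command of its own.
cat > /tmp/pitfall.sh <<'EOF'
printf 'option: %s\n' \
    --first \
# --second \
    --third
EOF
sh /tmp/pitfall.sh 2>&1 || true
# Prints "option: --first" followed by a "--third: ... not found" error.
```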
EDIT
I have deleted the comments and now obtain the following.
./buildnew.sh: 4: ./buildnew.sh: function: not found
ERROR: libx264 not found

The above error is observed in the terminal when I run the script. It builds up to some point and finishes the process with the following error.
libavcodec/libavcodec.a(libx264.o): In function `X264_frame':
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:159: undefined reference to `x264_picture_init'
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:179: undefined reference to `x264_encoder_reconfig'
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:191: undefined reference to `x264_encoder_encode'
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:197: undefined reference to `x264_encoder_delayed_frames'
libavcodec/libavcodec.a(libx264.o): In function `encode_nals':
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:196: undefined reference to `x264_bit_depth'
libavcodec/libavcodec.a(libx264.o): In function `X264_close':
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:231: undefined reference to `x264_encoder_close'
libavcodec/libavcodec.a(libx264.o): In function `X264_init':
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:284: undefined reference to `x264_param_default'
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:292: undefined reference to `x264_param_default_preset'
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:305: undefined reference to `x264_param_parse'
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:502: undefined reference to `x264_param_apply_fastfirstpass'
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:505: undefined reference to `x264_param_apply_profile'
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:544: undefined reference to `x264_encoder_open_125'
/home/mehmet/ffmpeg-0.10.4/libavcodec/libx264.c:554: undefined reference to `x264_encoder_headers'
./buildnew.sh: 51: ./buildnew.sh: build_one: not found
-
Resampling audio with FFMPEG LibAV
22 September 2020, by FennecFix
Well, since the FFmpeg documentation and code examples are absolute garbage, I guess my only choice is to come here and ask.


What I'm trying to do is simply record audio from the microphone and write it to a file. I initialize my input and output formats, get an audio packet, decode it, resample it, encode it and write it. But every time I try to play the audio, there's only a stub of data. It seems like for some reason only a start packet gets written, which is still very strange, and let me explain why:


if((response = swr_config_frame(resampleContext, audioOutputFrame, frame)) < 0) qDebug() << "can't configure frame!" << av_make_error(response);

if((response = swr_convert_frame(resampleContext, audioOutputFrame, frame)) < 0) qDebug() << "can't resample frame!" << av_make_error(response);



Here's the code I'm using to resample. My frame has data, but swr_convert_frame writes empty data to audioOutputFrame.

How do I fix that? FFmpeg is literally driving me crazy.


Here's the full code of my class


VideoReader.h


#ifndef VIDEOREADER_H
#define VIDEOREADER_H

extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavdevice/avdevice.h>
#include "libavutil/audio_fifo.h"
#include "libavformat/avio.h"
#include "libswresample/swresample.h"
#include "libavutil/opt.h"
}

#include <QString>
#include <QElapsedTimer>

class VideoReader
{
public:
 VideoReader();

 bool open(const char* filename);
 bool fillFrame();
 bool readFrame(uint8_t *&frameData);
 void close();

 int width, height;

private:
 bool configInput();
 bool configOutput(const char *filename);
 bool configResampler();

 bool encode(AVFrame *frame, AVCodecContext *encoderContext, AVPacket *outputPacket, int streamIndex, QString type);

 int audioStreamIndex = -1;
 int videoStreamIndex = -1;

 int64_t videoStartPts = 0;
 int64_t audioStartPts = 0;

 AVFormatContext* inputFormatContext = nullptr;
 AVFormatContext* outputFormatContext = nullptr;

 AVCodecContext* videoDecoderContext = nullptr;
 AVCodecContext* videoEncoderContext = nullptr;

 AVCodecContext* audioDecoderContext = nullptr;
 AVCodecContext* audioEncoderContext = nullptr;

 AVFrame* videoInputFrame = nullptr;
 AVFrame* audioInputFrame = nullptr;

 AVFrame* videoOutputFrame = nullptr;
 AVFrame* audioOutputFrame = nullptr;

 AVPacket* inputPacket = nullptr;

 AVPacket* videoOutputPacket = nullptr;
 AVPacket* audioOutputPacket = nullptr;

 SwsContext* innerScaleContext = nullptr;
 SwsContext* outerScaleContext = nullptr;

 SwrContext *resampleContext = nullptr;
};

#endif // VIDEOREADER_H


VideoReader.cpp


#include "VideoReader.h"

#include <QDebug>
#include <functional> // std::function is used in the lambdas below

static const char* av_make_error(int errnum)
{
 static char str[AV_ERROR_MAX_STRING_SIZE];
 memset(str, 0, sizeof(str));
 return av_make_error_string(str, AV_ERROR_MAX_STRING_SIZE, errnum);
}

VideoReader::VideoReader()
{

}

bool VideoReader::open(const char *filename)
{
 if(!configInput()) return false;
 if(!configOutput(filename)) return false;
 if(!configResampler()) return false;

 return true;
}

bool VideoReader::fillFrame()
{
 auto convertToYUV = [=](AVFrame* frame)
 {
 int response = 0;

 if((response = sws_scale(outerScaleContext, frame->data, frame->linesize, 0, videoEncoderContext->height, videoOutputFrame->data, videoOutputFrame->linesize)) < 0) qDebug() << "can't rescale" << av_make_error(response);
 };

 auto convertAudio = [this](AVFrame* frame)
 {
 int response = 0;

 auto& out = audioOutputFrame;
 qDebug() << out->linesize[0] << out->nb_samples;
 if((response = swr_convert_frame(resampleContext, audioOutputFrame, frame)) < 0) qDebug() << "can't resample frame!" << av_make_error(response);
 qDebug() << "poop";
 };

 auto decodeEncode = [=](AVPacket* inputPacket, AVFrame* inputFrame, AVCodecContext* decoderContext,
 AVPacket* outputPacket, AVFrame* outputFrame, AVCodecContext* encoderContext,
 std::function<void(AVFrame*)> convertFunc,
 int streamIndex, int64_t startPts, QString type)
 {
 int response = avcodec_send_packet(decoderContext, inputPacket);
 if(response < 0) { qDebug() << "failed to send" << type << "packet!" << av_make_error(response); return false; }

 response = avcodec_receive_frame(decoderContext, inputFrame);
 if(response == AVERROR(EAGAIN) || response == AVERROR_EOF) { av_packet_unref(inputPacket); return false; }
 else if (response < 0) { qDebug() << "failed to decode" << type << "frame!" << response << av_make_error(response); return false; }

 if(encoderContext)
 {
 outputFrame->pts = inputPacket->pts - startPts;

 convertFunc(inputFrame);
 if(!encode(outputFrame, encoderContext, outputPacket, streamIndex, type)) return false;
 }

 av_packet_unref(inputPacket);

 return true;
 };

 while(av_read_frame(inputFormatContext, inputPacket) >= 0) //actually read packet
 {
 if(inputPacket->stream_index == videoStreamIndex)
 {
 if(!videoStartPts) videoStartPts = inputPacket->pts;
 if(decodeEncode(inputPacket, videoInputFrame, videoDecoderContext, videoOutputPacket, videoOutputFrame, videoEncoderContext, convertToYUV, videoStreamIndex, videoStartPts, "video")) break;
 }
 else if(inputPacket->stream_index == audioStreamIndex)
 {
 if(!audioStartPts) audioStartPts = inputPacket->pts;
 if(decodeEncode(inputPacket, audioInputFrame, audioDecoderContext, audioOutputPacket, audioOutputFrame, audioEncoderContext, convertAudio, audioStreamIndex, audioStartPts, "audio")) break;
 }
 }

 return true;
}

bool VideoReader::readFrame(uint8_t *&frameData)
{
 if(!fillFrame()) { qDebug() << "readFrame method failed!"; return false; };

 const int bytesPerPixel = 4;

 uint8_t* destination[bytesPerPixel] = {frameData, NULL, NULL, NULL};
 int destinationLinesize[bytesPerPixel] = { videoInputFrame->width * bytesPerPixel, 0, 0, 0};

 sws_scale(innerScaleContext, videoInputFrame->data, videoInputFrame->linesize, 0, videoInputFrame->height, destination, destinationLinesize);

 return true;
}

void VideoReader::close()
{
 encode(NULL, videoEncoderContext, videoOutputPacket, videoStreamIndex, "video");
 encode(NULL, audioEncoderContext, audioOutputPacket, audioStreamIndex, "audio");

 if(av_write_trailer(outputFormatContext) < 0) { qDebug() << "failed to write trailer"; };

 avformat_close_input(&outputFormatContext);
 avformat_free_context(outputFormatContext);
 avformat_close_input(&inputFormatContext);
 avformat_free_context(inputFormatContext);

 av_frame_free(&videoInputFrame);
 av_frame_free(&audioInputFrame);

 av_frame_free(&videoOutputFrame);
 av_frame_free(&audioOutputFrame);

 av_packet_free(&inputPacket);

 av_packet_free(&videoOutputPacket);
 av_packet_free(&audioOutputPacket);

 avcodec_free_context(&videoDecoderContext);
 avcodec_free_context(&videoEncoderContext);

 avcodec_free_context(&audioDecoderContext);
 avcodec_free_context(&audioEncoderContext);

 sws_freeContext(innerScaleContext);
 sws_freeContext(outerScaleContext);

 swr_free(&resampleContext);
}

bool VideoReader::configInput()
{
 avdevice_register_all();

 inputFormatContext = avformat_alloc_context();

 if(!inputFormatContext) { qDebug() << "can't create context!"; return false; }

 const char* inputFormatName = "dshow";/*"gdigrab"*/
 AVInputFormat* inputFormat = av_find_input_format(inputFormatName);

 if(!inputFormat){ qDebug() << "Can't find" << inputFormatName; return false; }

 AVDictionary* options = NULL;
 av_dict_set(&options, "framerate", "30", 0);
 av_dict_set(&options, "video_size", "1920x1080", 0);

 if(avformat_open_input(&inputFormatContext, "video=HD USB Camera:audio=Microphone (High Definition Audio Device)" /*"desktop"*/, inputFormat, &options) != 0) { qDebug() << "can't open video file!"; return false; }

 AVCodecParameters* videoCodecParams = nullptr;
 AVCodecParameters* audioCodecParams = nullptr;
 AVCodec* videoDecoder = nullptr;
 AVCodec* audioDecoder = nullptr;

 for (uint i = 0; i < inputFormatContext->nb_streams; ++i)
 {
 auto stream = inputFormatContext->streams[i];
 auto codecParams = stream->codecpar;

 if(codecParams->codec_type == AVMEDIA_TYPE_AUDIO) { audioStreamIndex = i; audioDecoder = avcodec_find_decoder(codecParams->codec_id); audioCodecParams = codecParams; }
 if(codecParams->codec_type == AVMEDIA_TYPE_VIDEO) { videoStreamIndex = i; videoDecoder = avcodec_find_decoder(codecParams->codec_id); videoCodecParams = codecParams; }

 if(audioStreamIndex != -1 && videoStreamIndex != -1) break;
 }

 if(audioStreamIndex == -1) { qDebug() << "failed to find audio stream inside file"; return false; }
 if(videoStreamIndex == -1) { qDebug() << "failed to find video stream inside file"; return false; }

 auto configureCodecContext = [=](AVCodecContext*& context, AVCodec* decoder, AVCodecParameters* params, AVFrame*& frame, QString type)
 {
 context = avcodec_alloc_context3(decoder);
 if(!context) { qDebug() << "failed to create" << type << "decoder context!"; return false; }

 if(avcodec_parameters_to_context(context, params) < 0) { qDebug() << "can't initialize input" << type << "decoder context"; return false; }

 if(avcodec_open2(context, decoder, NULL) < 0) { qDebug() << "can't open" << type << "decoder"; return false; }

 frame = av_frame_alloc();
 if(!frame) { qDebug() << "can't allocate" << type << "frame"; return false; }

 return true;
 };

 if(!configureCodecContext(videoDecoderContext, videoDecoder, videoCodecParams, videoInputFrame, "video")) return false;
 if(!configureCodecContext(audioDecoderContext, audioDecoder, audioCodecParams, audioInputFrame, "audio")) return false;

 audioDecoderContext->channel_layout = AV_CH_LAYOUT_STEREO;
 audioInputFrame->channel_layout = audioDecoderContext->channel_layout;

 inputPacket = av_packet_alloc();
 if(!inputPacket) { qDebug() << "can't allocate input packet!"; return false; }

 //first frame, needed fo initialization
 if(!fillFrame()) { qDebug() << "Failed to fill frame on init!"; return false; };

 width = videoDecoderContext->width;
 height = videoDecoderContext->height;

 innerScaleContext = sws_getContext(width, height, videoDecoderContext->pix_fmt,
 width, height, AV_PIX_FMT_RGB0,
 SWS_FAST_BILINEAR,
 NULL,
 NULL,
 NULL);

 outerScaleContext = sws_getContext(width, height, videoDecoderContext->pix_fmt,
 width, height, AV_PIX_FMT_YUV420P,
 SWS_FAST_BILINEAR,
 NULL,
 NULL,
 NULL);


 if(!innerScaleContext || !outerScaleContext) { qDebug() << "failed to initialize scaler contexts"; return false; }

 return true;
}

bool VideoReader::configOutput(const char *filename)
{
 avformat_alloc_output_context2(&outputFormatContext, NULL, NULL, filename);
 if(!outputFormatContext) { qDebug() << "failed to create output context"; return false; }

 AVOutputFormat* outputFormat = outputFormatContext->oformat;

 auto prepareOutputContext = [=](AVCodecContext*& encoderContext,
 std::function<void(AVCodecContext*, AVCodec*)> configureContextFunc,
 std::function<void(AVFrame*)> configureFrameFunc,
 AVCodecID codecId, AVFrame*& frame, AVPacket*& packet, QString type)
 {
 auto stream = avformat_new_stream(outputFormatContext, NULL);
 if(!stream) { qDebug() << "failed to allocate output" << type << "stream"; return false; }

 AVCodec* encoder = avcodec_find_encoder(codecId);
 if(!encoder) { qDebug() << "failed to find" << type << "encoder!"; return false; }

 encoderContext = avcodec_alloc_context3(encoder);
 if(!encoderContext) { qDebug() << "failed to create video encoder context!"; return false; }

 configureContextFunc(encoderContext, encoder);

 int result = avcodec_open2(encoderContext, encoder, NULL);
 if(result < 0) { qDebug() << "failed to open audio encoder" << av_make_error(result); return false; }
 if(avcodec_parameters_from_context(stream->codecpar, encoderContext) < 0) { qDebug() << "failed to copy parameters to audio output stream"; return false; }

 packet = av_packet_alloc();
 if(!packet) {qDebug() << "failed allocate output" << type << "packet"; return false;}

 frame = av_frame_alloc();
 if(!frame) { qDebug() << "can't allocate output" << type << "frame"; return false; }

 configureFrameFunc(frame);

 av_frame_get_buffer(frame, 0);

 return true;
 };

 auto configureAudioFrame = [=](AVFrame* frame)
 {
 frame->nb_samples = audioEncoderContext->frame_size;
 frame->format = audioEncoderContext->sample_fmt;
 frame->sample_rate = audioEncoderContext->sample_rate;
 frame->channel_layout = av_get_default_channel_layout(audioDecoderContext->channels);
 };

 auto configureAudioEncoderContext = [=](AVCodecContext* encoderContext, AVCodec* encoder)
 {
 encoderContext->bit_rate = 64000;
 encoderContext->sample_fmt = encoder->sample_fmts[0];
 encoderContext->sample_rate = 44100;
 encoderContext->codec_type = AVMEDIA_TYPE_AUDIO;
 encoderContext->channel_layout = AV_CH_LAYOUT_STEREO;
 encoderContext->channels = av_get_channel_layout_nb_channels(encoderContext->channel_layout);
 };

 auto configureVideoFrame = [=](AVFrame* frame)
 {
 frame->format = videoEncoderContext->pix_fmt;
 frame->width = videoEncoderContext->width;
 frame->height = videoEncoderContext->height;
 };

 auto configureVideoEncoderContext = [=](AVCodecContext* encoderContext, AVCodec* encoder)
 {
 encoderContext->width = videoDecoderContext->width;
 encoderContext->height = videoDecoderContext->height;
 encoderContext->pix_fmt = encoder->pix_fmts[0];
 encoderContext->gop_size = 10;
 encoderContext->max_b_frames = 1;
 encoderContext->framerate = AVRational{30, 1};
 encoderContext->time_base = AVRational{1, 30};

 av_opt_set(encoderContext->priv_data, "preset", "ultrafast", 0);
 av_opt_set(encoderContext->priv_data, "tune", "zerolatency", 0);
 };

 if(!prepareOutputContext(videoEncoderContext, configureVideoEncoderContext, configureVideoFrame, outputFormat->video_codec, videoOutputFrame, videoOutputPacket, "video")) return false;
 if(!prepareOutputContext(audioEncoderContext, configureAudioEncoderContext, configureAudioFrame, outputFormat->audio_codec, audioOutputFrame, audioOutputPacket, "audio")) return false;

 if(outputFormat->flags & AVFMT_GLOBALHEADER) outputFormat->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

 int result = 0;
 if(!(outputFormat->flags & AVFMT_NOFILE))
 if((result = avio_open(&outputFormatContext->pb, filename, AVIO_FLAG_WRITE)) < 0)
 { qDebug() << "failed to open file" << av_make_error(result); return false; }

 result = avformat_write_header(outputFormatContext, NULL);
 if(result < 0) {qDebug() << "failed to write header!" << av_make_error(result); return false; }

 return true;
}

bool VideoReader::configResampler()
{

 resampleContext = swr_alloc_set_opts(NULL,
 av_get_default_channel_layout(audioEncoderContext->channels),
 audioEncoderContext->sample_fmt,
 audioEncoderContext->sample_rate,
 av_get_default_channel_layout(audioDecoderContext->channels),
 audioDecoderContext->sample_fmt,
 audioDecoderContext->sample_rate,
 0, NULL);
 if (!resampleContext) { qDebug() << "Could not allocate resample context"; return false; }

 int error;
 if ((error = swr_init(resampleContext)) < 0) { qDebug() << "Could not open resample context"; swr_free(&resampleContext); return false; }

 return true;
}

bool VideoReader::encode(AVFrame* frame, AVCodecContext* encoderContext, AVPacket* outputPacket, int streamIndex, QString type)
{
 int response;

 response = avcodec_send_frame(encoderContext, frame);
 if(response < 0) { qDebug() << "failed to send" << type << "frame" << av_make_error(response); return false; }

 while(response >= 0)
 {
 response = avcodec_receive_packet(encoderContext, outputPacket);
 if(response == AVERROR(EAGAIN) || response == AVERROR_EOF) { av_packet_unref(outputPacket); continue; }
 else if (response < 0) { qDebug() << "failed to encode" << type << "frame!" << response << av_make_error(response); return false; }

 outputPacket->stream_index = streamIndex;

 AVStream *inStream = inputFormatContext->streams[streamIndex];
 AVStream *outStream = outputFormatContext->streams[streamIndex];

 av_packet_rescale_ts(outputPacket, inStream->time_base, outStream->time_base);

 if((response = av_interleaved_write_frame(outputFormatContext, outputPacket)) != 0) { qDebug() << "Failed to write" << type << "packet!" << av_make_error(response); av_packet_unref(outputPacket); return false; }

 av_packet_unref(outputPacket);
 }

 return true;
}


I could try to write a shorter example if needed.