
Advanced search
Media (91)
-
Valkaama DVD Cover Outside
4 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Valkaama DVD Cover Inside
4 October 2011
Updated: October 2011
Language: English
Type: Image
-
1,000,000
27 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Demon Seed
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
The Four of Us are Dying
26 September 2011
Updated: September 2011
Language: English
Type: Audio
Other articles (50)
-
Customising by adding a logo, a banner or a background image
5 September 2013
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is version 0.2 or later. If necessary, contact your MédiaSpip administrator to find out. -
Accepted formats
28 January 2010
The following commands give information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This non-exhaustive list highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
Possible output video formats
To begin with, we (...)
On other sites (9375)
-
FFMPEG avformat_open_input returning AVERROR_INVALIDDATA [on hold]
10 August 2016, by Victor.dMdB
I'm trying to use ffmpeg to take in video data from a buffer, but I keep getting the same error.
I've implemented the same code as described here:
http://www.ffmpeg.org/doxygen/trunk/doc_2examples_2avio_reading_8c-example.html
Here is the code (I've tried removing the unnecessary bits):
VideoInputFile *video_input_file;
VideoDataConf *video_data_conf;
video_data_conf->width = 1920;
video_data_conf->height = 1080;
int vbuf_size = 9 * cmd_data_ptr->video_data_conf.width * cmd_data_ptr->video_data_conf.height + 10000;
uint8_t *buffer = (uint8_t *) av_malloc(vbuf_size);
video_data_conf->input_ptr.ptr = buffer;
video_data_conf->input_ptr.size = vbuf_size;
strcpy(video_data_conf->filename, "localmem");
video_input_file->vbuf_size = 9 * video_data_conf->width * video_data_conf->height + 10000;
video_input_file->vbuf = (uint8_t *) av_malloc(video_input_file->vbuf_size);
video_input_file->av_io_ctx = avio_alloc_context(video_input_file->vbuf, video_input_file->vbuf_size, 0, &video_data_conf->input_ptr, &read_function, NULL, NULL);
if ( !video_input_file->av_io_ctx ) {
fprintf(stdout,"Failed to create the buffer avio context\n");
}
video_input_file->av_fmt_ctx = avformat_alloc_context();
if ( !video_input_file->av_fmt_ctx ) {
fprintf(stdout,"Failed to create the video avio context\n");
}
video_input_file->av_fmt_ctx->pb = video_input_file->av_io_ctx;
open_res = avformat_open_input(&video_input_file->av_fmt_ctx, video_data_conf->filename, NULL, NULL);
Read function:
static int read_function(void* opaque, uint8_t* buf, int buf_size) {
BufferData *bd = (BufferData *) opaque;
buf_size = FFMIN(buf_size, bd->size);
memcpy(buf, bd->ptr, buf_size);
bd->ptr += buf_size;
bd->size -= buf_size;
return buf_size;
}
BufferData structure:
typedef struct {
uint8_t *ptr;
size_t size; ///< size left in the buffer
} BufferData;
It starts to work if I initialise the buffer and vbuf_size with a real file like so:
uint8_t *buffer;
size_t vbuf_size;
av_file_map("/path/to/image.png", &buffer, &vbuf_size, 0, NULL); -
Downscaling a video from 1080p to 480p using swscale and encoding to x265 gives a glitched output
5 May 2023, by lokit khemka
I am first scaling a frame and then sending it to the encoder, as below:


scaled_frame->pts = input_frame->pts;
scaled_frame->pkt_dts = input_frame->pkt_dts;
scaled_frame->pict_type = input_frame->pict_type;
sws_scale_frame(encoder->sws_ctx, scaled_frame, input_frame);
if (encode_video(decoder, encoder, scaled_frame))
 return -1;



The scaling context is configured as:


scaled_frame->width = 854;
scaled_frame->height=480; 
encoder->sws_ctx = sws_getContext(1920, 1080,
 decoder->video_avcc->pix_fmt, 
 scaled_frame->width, scaled_frame->height, decoder->video_avcc->pix_fmt, SWS_BICUBIC, NULL, NULL, NULL );
 if (!encoder->sws_ctx){logging("Cannot Create Scaling Context."); return -1;}



The encoder is configured as:


encoder_sc->video_avcc->height = decoder_ctx->height; //1080
 encoder_sc->video_avcc->width = decoder_ctx->width; //1920
 encoder_sc->video_avcc->bit_rate = 2 * 1000 * 1000;
 encoder_sc->video_avcc->rc_buffer_size = 4 * 1000 * 1000;
 encoder_sc->video_avcc->rc_max_rate = 2 * 1000 * 1000;
 encoder_sc->video_avcc->rc_min_rate = 2.5 * 1000 * 1000;

 encoder_sc->video_avcc->time_base = av_inv_q(input_framerate);
 encoder_sc->video_avs->time_base = encoder_sc->video_avcc->time_base;



When I get the output, the video is still 1080p and shows glitches (screenshot omitted).
I changed the encoder avcc resolution to 480p (854 x 480). However, that causes the video to be sliced to the top quarter of the original frame.
I am new to FFmpeg and video processing in general.
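A possible cause (an assumption, not a confirmed diagnosis) is that scaled_frame never gets a pixel format or data buffers of its own (the format line is commented out in the full listing below), while the encoder context still advertises the 1080p input size. A minimal sketch of the allocation usually done before sws_scale_frame(), assuming the destination keeps the source pixel format:
scaled_frame->format = decoder->video_avcc->pix_fmt; /* the pixel format must be set explicitly */
scaled_frame->width = 854;
scaled_frame->height = 480;
if (av_frame_get_buffer(scaled_frame, 0) < 0) /* allocate the data planes */
{
 logging("Cannot allocate buffers for the scaled frame.");
 return -1;
}
/* ...and the encoder context should match the frames it is actually fed: */
encoder->video_avcc->width = scaled_frame->width;
encoder->video_avcc->height = scaled_frame->height;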


EDIT: I am adding a minimal reproducible code sample. It is quite long because it has to include the decoding, scaling and encoding code, since the error could be in either the scaling or the encoding:


#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/timestamp.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>

#include <stdio.h>
#include <stdarg.h>
#include <stdlib.h>
#include <string.h>

typedef struct StreamingContext{
 AVFormatContext* avfc;
 AVCodec *video_avc;
 AVCodec *audio_avc;
 AVStream *video_avs;
 AVStream *audio_avs;
 AVCodecContext *video_avcc;
 AVCodecContext *audio_avcc;
 int video_index;
 int audio_index;
 char* filename;
 struct SwsContext *sws_ctx;
}StreamingContext;


typedef struct StreamingParams{
 char copy_video;
 char copy_audio;
 char *output_extension;
 char *muxer_opt_key;
 char *muxer_opt_value;
 char *video_codec;
 char *audio_codec;
 char *codec_priv_key;
 char *codec_priv_value;
}StreamingParams;

void logging(const char *fmt, ...)
{
 va_list args;
 fprintf(stderr, "LOG: ");
 va_start(args, fmt);
 vfprintf(stderr, fmt, args);
 va_end(args);
 fprintf(stderr, "\n");
}

int fill_stream_info(AVStream *avs, AVCodec **avc, AVCodecContext **avcc)
{
 *avc = avcodec_find_decoder(avs->codecpar->codec_id);
 if (!*avc)
 {
 logging("Failed to find the codec.\n");
 return -1;
 }

 *avcc = avcodec_alloc_context3(*avc);
 if (!*avcc)
 {
 logging("Failed to alloc memory for codec context.");
 return -1;
 }

 if (avcodec_parameters_to_context(*avcc, avs->codecpar) < 0)
 {
 logging("Failed to fill Codec Context.");
 return -1;
 }

 if (avcodec_open2(*avcc, *avc, NULL) < 0)
 {
 logging("Failed to open Codec.");
 return -1;
 }

 return 0;
}

int open_media(const char *in_filename, AVFormatContext **avfc)
{
 *avfc = avformat_alloc_context();

 if (!*avfc)
 {
 logging("Failed to Allocate Memory for Format Context");
 return -1;
 }

 if (avformat_open_input(avfc, in_filename, NULL, NULL) != 0)
 {
 logging("Failed to open input file %s", in_filename);
 return -1;
 }

 if (avformat_find_stream_info(*avfc, NULL) < 0)
 {
 logging("Failed to get Stream Info.");
 return -1;
 }
 return 0;
}

int prepare_decoder(StreamingContext *sc)
{
 for (int i = 0; i < sc->avfc->nb_streams; i++)
 {
 if (sc->avfc->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 sc->video_avs = sc->avfc->streams[i];
 sc->video_index = i;

 if (fill_stream_info(sc->video_avs, &sc->video_avc, &sc->video_avcc))
 {
 return -1;
 }
 }
 else
 {
 logging("Skipping Streams other than Video.");
 }
 }
 return 0;
}

int prepare_video_encoder(StreamingContext *encoder_sc, AVCodecContext *decoder_ctx, AVRational input_framerate,
 StreamingParams sp)
{
 encoder_sc->video_avs = avformat_new_stream(encoder_sc->avfc, NULL);
 encoder_sc->video_avc = avcodec_find_encoder_by_name(sp.video_codec);
 if (!encoder_sc->video_avc)
 {
 logging("Cannot find the Codec.");
 return -1;
 }

 encoder_sc->video_avcc = avcodec_alloc_context3(encoder_sc->video_avc);
 if (!encoder_sc->video_avcc)
 {
 logging("Could not allocate memory for Codec Context.");
 return -1;
 }

 av_opt_set(encoder_sc->video_avcc->priv_data, "preset", "fast", 0);
 if (sp.codec_priv_key && sp.codec_priv_value)
 av_opt_set(encoder_sc->video_avcc->priv_data, sp.codec_priv_key, sp.codec_priv_value, 0);

 encoder_sc->video_avcc->height = decoder_ctx->height;
 encoder_sc->video_avcc->width = decoder_ctx->width;
 encoder_sc->video_avcc->sample_aspect_ratio = decoder_ctx->sample_aspect_ratio;

 if (encoder_sc->video_avc->pix_fmts)
 encoder_sc->video_avcc->pix_fmt = encoder_sc->video_avc->pix_fmts[0];
 else
 encoder_sc->video_avcc->pix_fmt = decoder_ctx->pix_fmt;

 encoder_sc->video_avcc->bit_rate = 2 * 1000 * 1000;
 encoder_sc->video_avcc->rc_buffer_size = 4 * 1000 * 1000;
 encoder_sc->video_avcc->rc_max_rate = 2 * 1000 * 1000;
 encoder_sc->video_avcc->rc_min_rate = 2.5 * 1000 * 1000;
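 /* Note: rc_min_rate (2.5 Mbit/s) is set above rc_max_rate (2 Mbit/s);
  these bounds look inverted, giving the rate controller contradictory
  constraints. */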

 encoder_sc->video_avcc->time_base = av_inv_q(input_framerate);
 encoder_sc->video_avs->time_base = encoder_sc->video_avcc->time_base;

 

 if (avcodec_open2(encoder_sc->video_avcc, encoder_sc->video_avc, NULL) < 0)
 {
 logging("Could not open the Codec.");
 return -1;
 }
 avcodec_parameters_from_context(encoder_sc->video_avs->codecpar, encoder_sc->video_avcc);
 return 0;
}

int encode_video(StreamingContext *decoder, StreamingContext *encoder, AVFrame *input_frame)
{
 if (input_frame)
 input_frame->pict_type = AV_PICTURE_TYPE_NONE;

 AVPacket *output_packet = av_packet_alloc();
 if (!output_packet)
 {
 logging("Could not allocate memory for Output Packet.");
 return -1;
 }

 int response = avcodec_send_frame(encoder->video_avcc, input_frame);

 while (response >= 0)
 {
 response = avcodec_receive_packet(encoder->video_avcc, output_packet);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
 {
 break;
 }
 else if (response < 0)
 {
 logging("Error while receiving packet from encoder: %s", av_err2str(response));
 return -1;
 }

 output_packet->stream_index = decoder->video_index;
 output_packet->duration = encoder->video_avs->time_base.den / encoder->video_avs->time_base.num / decoder->video_avs->avg_frame_rate.num * decoder->video_avs->avg_frame_rate.den;
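 /* The duration of one frame in stream time-base ticks is
  tb.den / (tb.num * fps): e.g. a 1/15360 time base at 60 fps
  gives 15360 / 60 = 256 ticks per frame. */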

 av_packet_rescale_ts(output_packet, decoder->video_avs->time_base, encoder->video_avs->time_base);
 response = av_interleaved_write_frame(encoder->avfc, output_packet);
 if (response != 0)
 {
 logging("Error %d while receiving packet from decoder: %s", response, av_err2str(response));
 return -1;
 }
 }

 av_packet_unref(output_packet);
 av_packet_free(&output_packet);

 return 0;
}

int transcode_video(StreamingContext *decoder, StreamingContext *encoder, AVPacket *input_packet, AVFrame *input_frame, AVFrame *scaled_frame)
{
 int response = avcodec_send_packet(decoder->video_avcc, input_packet);
 if (response < 0)
 {
 logging("Error while sending the Packet to Decoder: %s", av_err2str(response));
 return response;
 }

 while (response >= 0)
 {
 response = avcodec_receive_frame(decoder->video_avcc, input_frame);
 
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF)
 {
 break;
 }
 else if (response < 0)
 {
 logging("Error while receiving frame from Decoder: %s", av_err2str(response));
 return response;
 }
 if (response >= 0)
 {
 scaled_frame->pts = input_frame->pts;
 scaled_frame->pkt_dts = input_frame->pkt_dts;
 scaled_frame->pict_type = input_frame->pict_type;
 sws_scale_frame(encoder->sws_ctx, scaled_frame, input_frame);
 if (encode_video(decoder, encoder, scaled_frame))
 return -1;
 }

 av_frame_unref(input_frame);
 }
 return 0;
}

int main(int argc, char *argv[])
{
 StreamingParams sp = {0};
 sp.copy_audio = 1;
 sp.copy_video = 0;
 sp.video_codec = "libx265";


 StreamingContext *decoder = (StreamingContext *)calloc(1, sizeof(StreamingContext));
 decoder->filename = argv[1];

 StreamingContext *encoder = (StreamingContext *)calloc(1, sizeof(StreamingContext));
 encoder->filename = argv[2];

 if (sp.output_extension)
 {
 strcat(encoder->filename, sp.output_extension);
 }

 if (open_media(decoder->filename, &decoder->avfc))
 return -1;
 if (prepare_decoder(decoder))
 return -1;

 avformat_alloc_output_context2(&encoder->avfc, NULL, NULL, encoder->filename);
 if (!encoder->avfc)
 {
 logging("Could not allocate memory for output Format Context.");
 return -1;
 }

 AVRational input_framerate = av_guess_frame_rate(decoder->avfc, decoder->video_avs, NULL);
 prepare_video_encoder(encoder, decoder->video_avcc, input_framerate, sp);


 if (encoder->avfc->oformat->flags & AVFMT_GLOBALHEADER)
 encoder->avfc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

 if (!(encoder->avfc->oformat->flags & AVFMT_NOFILE))
 {
 if (avio_open(&encoder->avfc->pb, encoder->filename, AVIO_FLAG_WRITE) < 0)
 {
 logging("could not open the output file");
 return -1;
 }
 }

 AVDictionary *muxer_opts = NULL;

 if (sp.muxer_opt_key && sp.muxer_opt_value)
 {
 av_dict_set(&muxer_opts, sp.muxer_opt_key, sp.muxer_opt_value, 0);
 }

 if (avformat_write_header(encoder->avfc, &muxer_opts) < 0)
 {
 logging("an error occurred when opening output file");
 return -1;
 }

 AVFrame *input_frame = av_frame_alloc();
 AVFrame *scaled_frame = av_frame_alloc();
 if (!input_frame || !scaled_frame)
 {
 logging("Failed to allocate memory for AVFrame");
 return -1;
 }

 // scaled_frame->format = AV_PIX_FMT_YUV420P;
 scaled_frame->width = 854;
 scaled_frame->height=480; 

 //Creating Scaling Context
 encoder->sws_ctx = sws_getContext(1920, 1080,
 decoder->video_avcc->pix_fmt, 
 scaled_frame->width, scaled_frame->height, decoder->video_avcc->pix_fmt, SWS_BICUBIC, NULL, NULL, NULL );
 if (!encoder->sws_ctx){logging("Cannot Create Scaling Context."); return -1;}


 AVPacket *input_packet = av_packet_alloc();
 if (!input_packet)
 {
 logging("Failed to allocate memory for AVPacket.");
 return -1;
 }

 while (av_read_frame(decoder->avfc, input_packet) >= 0)
 {
 if (decoder->avfc->streams[input_packet->stream_index]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 if (transcode_video(decoder, encoder, input_packet, input_frame, scaled_frame))
 return -1;
 av_packet_unref(input_packet);
 }
 else
 {
 logging("Ignoring all nonvideo packets.");
 }
 }

 if (encode_video(decoder, encoder, NULL))
 return -1;

 av_write_trailer(encoder->avfc);

 if (muxer_opts != NULL)
 {
 av_dict_free(&muxer_opts);
 muxer_opts = NULL;
 }

 if (input_frame != NULL)
 {
 av_frame_free(&input_frame);
 input_frame = NULL;
 }

 if (input_packet != NULL)
 {
 av_packet_free(&input_packet);
 input_packet = NULL;
 }

 avformat_close_input(&decoder->avfc);

 avformat_free_context(decoder->avfc);
 decoder->avfc = NULL;
 avformat_free_context(encoder->avfc);
 encoder->avfc = NULL;

 avcodec_free_context(&decoder->video_avcc);
 decoder->video_avcc = NULL;
 avcodec_free_context(&decoder->audio_avcc);
 decoder->audio_avcc = NULL;

 free(decoder);
 decoder = NULL;
 free(encoder);
 encoder = NULL;

 return 0;
}



The video I am using for testing is available at the repo: https://github.com/leandromoreira/ffmpeg-libav-tutorial


The file name is small_bunny_1080p_60fps.mp4


-
Overlaying a text stream on a video stream with ffmpeg in Node.js
16 May 2023, by Tchoune
I am creating a streaming system with Node.js that uses ffmpeg to send video and text streams to a local RTMP server, then combines those streams and sends them to Twitch.


I'm using canvas to create a text image with a transparent background, and I need to change that text every time a new video in the playlist starts.


Currently, in the combined stream I only see the video and not the text. But if I open each stream separately in VLC, I can see both.


However, I'm running into a problem where the text stream doesn't appear in the final video stream on Twitch. In addition, I get the following error message:


Combine stderr: [NULL @ 0x1407069f0] Unable to find a suitable output format for 'rtmp://live.twitch.tv/app/streamKey'
rtmp://live.twitch.tv/app/streamKey: Invalid argument



Here is my current Node.js code:



const createTextImage = (runner) => {
 return new Promise((resolve, reject) => {
 const canvas = createCanvas(1920, 1080);
 const context = canvas.getContext('2d');

 // Fill the background with transparency
 context.fillStyle = 'rgba(0,0,0,0)';
 context.fillRect(0, 0, canvas.width, canvas.height);

 // Set the text options
 context.fillStyle = '#ffffff';
 context.font = '24px Arial';
 context.textAlign = 'start';
 context.textBaseline = 'middle';

 // Draw the text
 context.fillText(`Speedrun by ${runner}`, canvas.width / 2, canvas.height / 2);

 // Define the images directory
 const imagesDir = path.join(__dirname, 'images', 'runners');

 // Ensure the images directory exists
 fs.mkdirSync(imagesDir, { recursive: true });

 // Define the file path
 const filePath = path.join(imagesDir, runner + '.png');

 // Create the write stream
 const out = fs.createWriteStream(filePath);

 // Create the PNG stream
 const stream = canvas.createPNGStream();

 // Pipe the PNG stream to the write stream
 stream.pipe(out);

 out.on('finish', () => {
 console.log('The PNG file was created.');
 resolve();
 });

 out.on('error', reject);
 });
}
const streamVideo = (video) => {
 ffmpegLibrary.ffprobe(video.video, function (err, metadata) {
 if (err) {
 console.error(err);
 return;
 }
 currentVideoDuration = metadata.format.duration;

 // Cancel the previous timeout before creating a new one
 if (nextVideoTimeoutId) {
 clearTimeout(nextVideoTimeoutId);
 }

 // Move your setTimeout call here
 nextVideoTimeoutId = setTimeout(() => {
 console.log('End of video, moving on to the next one...');
 nextVideo();
 }, currentVideoDuration * 1000 + 10000);
 })


 ffmpegVideo = childProcess.spawn('ffmpeg', [
 '-nostdin', '-re', '-f', 'concat', '-safe', '0', '-i', 'playlist.txt',
 '-vcodec', 'libx264',
 '-s', '1920x1080',
 '-r', '30',
 '-b:v', '5000k',
 '-acodec', 'aac',
 '-preset', 'veryfast',
 '-f', 'flv',
 `rtmp://localhost:1935/live/video` // send the video stream to the local RTMP server
 ]);

 createTextImage(video.runner).then(() => {
 ffmpegText = childProcess.spawn('ffmpeg', [
 '-nostdin', '-re',
 '-loop', '1', '-i', `images/runners/${video.runner}.png`, // use the image created by Puppeteer
 '-vcodec', 'libx264rgb', // use the PNG codec to keep transparency
 '-s', '1920x1080',
 '-r', '30',
 '-b:v', '5000k',
 '-acodec', 'aac',
 '-preset', 'veryfast',
 '-f', 'flv',
 `rtmp://localhost:1935/live/text` // send the text stream to the local RTMP server
 ]);

 ffmpegText.stdout.on('data', (data) => {
 console.log(`text stdout: ${data}`);
 });

 ffmpegText.stderr.on('data', (data) => {
 console.error(`text stderr: ${data}`);
 });
 }).catch(error => {
 console.error(`Error while creating the text image: ${error}`);
 });

 ffmpegCombine = childProcess.spawn('ffmpeg', [
 '-i', 'rtmp://localhost:1935/live/video',
 '-i', 'rtmp://localhost:1935/live/text',
 '-filter_complex', '[0:v][1:v]overlay=main_w-overlay_w:0',
 '-s', '1920x1080',
 '-r', '30',
 '-vcodec', 'libx264',
 '-b:v', '5000k',
 '-acodec', 'aac',
 '-preset', 'veryfast',
 '-f', 'flv',
 `rtmp://live.twitch.tv/app/${twitchStreamKey}` // send the combined stream to Twitch
 ]);
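 // A note on the "Unable to find a suitable output format" error above:
 // ffmpeg cannot infer a container from an rtmp:// URL, so '-f flv' has to
 // be parsed before the output URL (as it is in this array); that error
 // usually means the output was opened without an explicit format. The
 // logged URL also ends in the literal text 'streamKey', so it is worth
 // checking that twitchStreamKey is really interpolated and non-empty.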

 ffmpegVideo.stdout.on('data', (data) => {
 console.log(`video stdout: ${data}`);
 });

 ffmpegVideo.stderr.on('data', (data) => {
 console.error(`video stderr: ${data}`);
 });

 ffmpegCombine.stdout.on('data', (data) => {
 console.log(`Combine stdout: ${data}`);
 });

 ffmpegCombine.stderr.on('data', (data) => {
 console.error(`Combine stderr: ${data}`);
 });

 ffmpegCombine.on('close', (code) => {
 console.log(`ffmpeg exited with code ${code}`);
 if (currentIndex >= playlist.length) {
 console.log('End of playlist');
 currentIndex = 0;
 }
 });
}




Locally, I use nginx with the RTMP module to manage the multiple streams and combine them into one to send to Twitch.


Here is the RTMP block from my nginx.conf:


rtmp {
 server {
 listen 1935; # the port for the RTMP protocol
 
 application live {
 live on; # enable live streaming
 record off; # disable recording of the stream
 
 # define where the streams should be pushed
 push rtmp://live.twitch.tv/app/liveKey;
 }
 
 application text {
 live on; # enable live streaming
 record off; # disable recording of the stream
 }
 }
}



I have checked that the codecs, resolution and frame rate are the same for both streams. I am also overlaying the text stream on top of the video stream with the -filter_complex command (overlay=main_w-overlay_w:0 should pin the overlay to the top-right corner), but I am not sure it works correctly.


Does each stream have to have the same parameters?


I would like to know if anyone has any idea what could be causing this problem and how to fix it. Should I use a different format for the output stream to Twitch? Or is there another approach I should consider for layering a dynamic text stream over a video stream?


Also, I'm wondering if I'm handling the text-stream update correctly when the video changes. Currently, I create a new text image with canvas every time the video changes, then spawn a new ffmpeg process for the text stream. Is this the right approach, or is there a better way to handle this? One alternative is sketched below.
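For comparison, here is a sketch of an alternative (untested, and assuming one process can read both the playlist and the generated PNG): drop the intermediate RTMP hops and let a single ffmpeg instance decode the playlist, overlay the image, and push straight to Twitch.
const ffmpegSingle = childProcess.spawn('ffmpeg', [
 '-nostdin', '-re', '-f', 'concat', '-safe', '0', '-i', 'playlist.txt',
 '-loop', '1', '-i', `images/runners/${video.runner}.png`, // transparent text image
 '-filter_complex', '[0:v][1:v]overlay=main_w-overlay_w:0', // pin it to the top-right corner
 '-vcodec', 'libx264', '-s', '1920x1080', '-r', '30', '-b:v', '5000k',
 '-acodec', 'aac', '-preset', 'veryfast',
 '-f', 'flv',
 `rtmp://live.twitch.tv/app/${twitchStreamKey}` // one combined output, straight to Twitch
]);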


Thanks in advance for any help or advice.