
Other articles (105)
-
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
Accepted formats
28 January 2010, by
The following commands give information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats in use: h264 : H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 m4v : raw MPEG-4 video format flv : Flash Video (FLV) / Sorenson Spark / Sorenson H.263 Theora wmv :
Possible output video formats
As a first step we (...) -
Adding notes and captions to images
7 February 2011, by
To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, modifying and deleting notes. By default, only the site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (8724)
-
Problems using FFmpeg / libavfilter for adding overlay to grabbed frames
21 November 2024, by Michael
On Windows, with the latest FFmpeg / libav (full build, non-free), a C/C++ app reads YUV420P frames from a frame grabber card.


A bitmap (BGR24) overlay image loaded from a file should be drawn on every frame for the first 20 seconds via libavfilter. First, the BGR24 overlay image is converted to YUV420P via the format filter. Then the YUV420P frame from the frame grabber and the YUV420P overlay frame are pushed into the overlay filter.
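(As an aside, the same three-filter topology can also be described as a single filtergraph string and wired up with avfilter_graph_parse_ptr instead of manual avfilter_link calls. A minimal sketch, not part of the original program, assuming the two buffer sources and the buffer sink have already been created exactly as in the code further below:)

// Sketch only: wire a parsed filter description to already-created endpoints.
// Assumes buffersrc_ctx (1920x1080 yuv420p), overlay_buffersrc_ctx (165x165 bgr24)
// and buffersink_ctx exist, created with avfilter_graph_create_filter() as below.
AVFilterInOut* out_main = avfilter_inout_alloc();   // open outputs of our buffer sources
AVFilterInOut* out_ovl  = avfilter_inout_alloc();
AVFilterInOut* in_sink  = avfilter_inout_alloc();   // open input of our buffer sink

out_main->name       = av_strdup("main");
out_main->filter_ctx = buffersrc_ctx;
out_main->pad_idx    = 0;
out_main->next       = out_ovl;

out_ovl->name        = av_strdup("ovl");
out_ovl->filter_ctx  = overlay_buffersrc_ctx;
out_ovl->pad_idx     = 0;
out_ovl->next        = nullptr;

in_sink->name        = av_strdup("out");
in_sink->filter_ctx  = buffersink_ctx;
in_sink->pad_idx     = 0;
in_sink->next        = nullptr;

const char* graph_desc =
    "[ovl]format=yuv420p[f];"
    "[main][f]overlay=W-w:H-h:enable='between(t,0,20)':format=yuv420[out]";

int err = avfilter_graph_parse_ptr(filter_graph, graph_desc, &in_sink, &out_main, nullptr);
if (err >= 0)
    err = avfilter_graph_config(filter_graph, nullptr);

avfilter_inout_free(&in_sink);   // frees the whole remaining lists
avfilter_inout_free(&out_main);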


FFmpeg / libavfilter does not report any errors or warnings in the console / log. Trying to get the filtered frame out of the graph via av_buffersink_get_frame results in an EAGAIN return code.

The frames from the frame grabber card are fine; they can be encoded or written to a .yuv file. The overlay frame itself is fine too.
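(For reference, dumping a decoded YUV420P frame to a raw .yuv file for inspection can be done plane by plane; a hypothetical helper, not part of the program below:)

// Sketch: append one YUV420P frame to a raw .yuv file, plane by plane.
void dump_yuv420p(const AVFrame* f, FILE* out)
{
    // Y plane is full resolution; U and V planes are half resolution in both dimensions.
    for (int y = 0; y < f->height; y++)
        fwrite(f->data[0] + y * f->linesize[0], 1, f->width, out);
    for (int y = 0; y < f->height / 2; y++)
        fwrite(f->data[1] + y * f->linesize[1], 1, f->width / 2, out);
    for (int y = 0; y < f->height / 2; y++)
        fwrite(f->data[2] + y * f->linesize[2], 1, f->width / 2, out);
}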


This is the complete private code (prototype; no style, memory leaks, ...):


#define __STDC_LIMIT_MACROS
#define __STDC_CONSTANT_MACROS

#include <cstdio>
#include <cstdint>
#include 

#include "../fgproto/include/SDL/SDL_video.h"
#include 

using namespace _DSHOWLIB_NAMESPACE;

#ifdef _WIN32
//Windows
extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavdevice/avdevice.h"
#include "libavfilter/avfilter.h"
#include <libavutil/log.h>
#include <libavutil/mem.h>
#include "libavfilter/buffersink.h"
#include "libavfilter/buffersrc.h"
#include "libavutil/opt.h"
#include "libavutil/hwcontext_qsv.h"
#include "SDL/SDL.h"
};
#endif
#include <iostream>
#include <fstream>

void uSleep(double waitTimeInUs, LARGE_INTEGER frequency)
{
 LARGE_INTEGER startTime, currentTime;

 QueryPerformanceCounter(&startTime);

 if (waitTimeInUs > 16500.0)
 Sleep(1);

 do
 {
 YieldProcessor();
 //Sleep(0);
 QueryPerformanceCounter(&currentTime);
 }
 while (waitTimeInUs > (currentTime.QuadPart - startTime.QuadPart) * 1000000.0 / frequency.QuadPart);
}

void check_error(int ret)
{
 if (ret < 0)
 {
 char errbuf[128];
 int tmp = errno;
 av_strerror(ret, errbuf, sizeof(errbuf));
 std::cerr << "Error: " << errbuf << '\n';
 //exit(1);
 }
}

bool _isRunning = true;

void swap_uv_planes(AVFrame* frame)
{
 uint8_t* temp_plane = frame->data[1]; 
 frame->data[1] = frame->data[2]; 
 frame->data[2] = temp_plane; 
}

typedef struct
{
 const AVClass* avclass;
} MyFilterGraphContext;

static constexpr AVClass my_filter_graph_class = 
{
 .class_name = "MyFilterGraphContext",
 .item_name = av_default_item_name,
 .option = NULL,
 .version = LIBAVUTIL_VERSION_INT,
};

MyFilterGraphContext* init_log_context()
{
 MyFilterGraphContext* ctx = static_cast<MyFilterGraphContext*>(av_mallocz(sizeof(*ctx)));

 if (!ctx)
 {
 av_log(nullptr, AV_LOG_ERROR, "Unable to allocate MyFilterGraphContext\n");
 return nullptr;
 }

 ctx->avclass = &my_filter_graph_class;
 return ctx;
}

int init_overlay_filter(AVFilterGraph** graph, AVFilterContext** src_ctx, AVFilterContext** overlay_src_ctx,
 AVFilterContext** sink_ctx)
{
 AVFilterGraph* filter_graph;
 AVFilterContext* buffersrc_ctx;
 AVFilterContext* overlay_buffersrc_ctx;
 AVFilterContext* buffersink_ctx;
 AVFilterContext* overlay_ctx;
 AVFilterContext* format_ctx;

 const AVFilter* buffersrc, * buffersink, * overlay_buffersrc, * overlay_filter, * format_filter;
 int ret;

 // Create the filter graph
 filter_graph = avfilter_graph_alloc();
 if (!filter_graph)
 {
 fprintf(stderr, "Unable to create filter graph.\n");
 return AVERROR(ENOMEM);
 }

 // Create buffer source filter for main video
 buffersrc = avfilter_get_by_name("buffer");
 if (!buffersrc)
 {
 fprintf(stderr, "Unable to find buffer filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create buffer source filter for overlay image
 overlay_buffersrc = avfilter_get_by_name("buffer");
 if (!overlay_buffersrc)
 {
 fprintf(stderr, "Unable to find buffer filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create buffer sink filter
 buffersink = avfilter_get_by_name("buffersink");
 if (!buffersink)
 {
 fprintf(stderr, "Unable to find buffersink filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create overlay filter
 overlay_filter = avfilter_get_by_name("overlay");
 if (!overlay_filter)
 {
 fprintf(stderr, "Unable to find overlay filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create format filter
 format_filter = avfilter_get_by_name("format");
 if (!format_filter)
 {
 fprintf(stderr, "Unable to find format filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Initialize the main video buffer source
 char args[512];

 // Initialize the overlay buffer source
 snprintf(args, sizeof(args), "video_size=165x165:pix_fmt=bgr24:time_base=1/25:pixel_aspect=1/1"); 

 ret = avfilter_graph_create_filter(&overlay_buffersrc_ctx, overlay_buffersrc, nullptr, args, nullptr,
 filter_graph);

 if (ret < 0)
 {
 fprintf(stderr, "Unable to create buffer source filter for overlay.\n");
 return ret;
 }

 snprintf(args, sizeof(args), "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1");

 ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, nullptr, args, nullptr, filter_graph);

 if (ret < 0)
 {
 fprintf(stderr, "Unable to create buffer source filter for main video.\n");
 return ret;
 }

 // Initialize the format filter to convert overlay image to yuv420p
 snprintf(args, sizeof(args), "pix_fmts=yuv420p");

 ret = avfilter_graph_create_filter(&format_ctx, format_filter, nullptr, args, nullptr, filter_graph);

 if (ret < 0)
 {
 fprintf(stderr, "Unable to create format filter.\n");
 return ret;
 }

 // Initialize the overlay filter
 ret = avfilter_graph_create_filter(&overlay_ctx, overlay_filter, nullptr, "W-w:H-h:enable='between(t,0,20)':format=yuv420", nullptr, filter_graph);
 if (ret < 0)
 {
 fprintf(stderr, "Unable to create overlay filter.\n");
 return ret;
 }

 // Initialize the buffer sink
 ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, nullptr, nullptr, nullptr, filter_graph);
 if (ret < 0)
 {
 fprintf(stderr, "Unable to create buffer sink filter.\n");
 return ret;
 }

 // Connect the filters
 ret = avfilter_link(overlay_buffersrc_ctx, 0, format_ctx, 0);

 if (ret >= 0)
 {
 ret = avfilter_link(buffersrc_ctx, 0, overlay_ctx, 0);
 }
 else
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }


 if (ret >= 0)
 {
 ret = avfilter_link(format_ctx, 0, overlay_ctx, 1);
 }
 else
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }

 if (ret >= 0)
 {
 if ((ret = avfilter_link(overlay_ctx, 0, buffersink_ctx, 0)) < 0)
 {
 fprintf(stderr, "Unable to link filter graph.\n");
 return ret;
 }
 }
 else
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }

 MyFilterGraphContext* log_ctx = init_log_context();

 // Configure the filter graph
 if ((ret = avfilter_graph_config(filter_graph, log_ctx)) < 0)
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }

 *graph = filter_graph;
 *src_ctx = buffersrc_ctx;
 *overlay_src_ctx = overlay_buffersrc_ctx;
 *sink_ctx = buffersink_ctx;

 return 0;
}

int main(int argc, char* argv[])
{
 unsigned int videoIndex = 0;

 avdevice_register_all();

 av_log_set_level(AV_LOG_TRACE);

 const AVInputFormat* pFrameGrabberInputFormat = av_find_input_format("dshow");

 constexpr int frameGrabberPixelWidth = 1920;
 constexpr int frameGrabberPixelHeight = 1080;
 constexpr int frameGrabberFrameRate = 25;
 constexpr AVPixelFormat frameGrabberPixelFormat = AV_PIX_FMT_YUV420P;

 char shortStringBuffer[32];

 AVDictionary* pFrameGrabberOptions = nullptr;

 _snprintf_s(shortStringBuffer, sizeof(shortStringBuffer), "%dx%d", frameGrabberPixelWidth, frameGrabberPixelHeight);
 av_dict_set(&pFrameGrabberOptions, "video_size", shortStringBuffer, 0);

 _snprintf_s(shortStringBuffer, sizeof(shortStringBuffer), "%d", frameGrabberFrameRate);

 av_dict_set(&pFrameGrabberOptions, "framerate", shortStringBuffer, 0);
 av_dict_set(&pFrameGrabberOptions, "pixel_format", "yuv420p", 0);
 av_dict_set(&pFrameGrabberOptions, "rtbufsize", "128M", 0);

 AVFormatContext* pFrameGrabberFormatContext = avformat_alloc_context();

 pFrameGrabberFormatContext->flags = AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS;

 if (avformat_open_input(&pFrameGrabberFormatContext, "video=MZ0380 PCI, Analog 01 Capture",
 pFrameGrabberInputFormat, &pFrameGrabberOptions) != 0)
 {
 std::cerr << "Couldn't open input stream." << '\n';
 return -1;
 }

 if (avformat_find_stream_info(pFrameGrabberFormatContext, nullptr) < 0)
 {
 std::cerr << "Couldn't find stream information." << '\n';
 return -1;
 }

 bool foundVideoStream = false;

 for (unsigned int loop_videoIndex = 0; loop_videoIndex < pFrameGrabberFormatContext->nb_streams; loop_videoIndex++)
 {
 if (pFrameGrabberFormatContext->streams[loop_videoIndex]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 videoIndex = loop_videoIndex;
 foundVideoStream = true;
 break;
 }
 }

 if (!foundVideoStream)
 {
 std::cerr << "Couldn't find a video stream." << '\n';
 return -1;
 }

 const AVCodec* pFrameGrabberCodec = avcodec_find_decoder(
 pFrameGrabberFormatContext->streams[videoIndex]->codecpar->codec_id);

 AVCodecContext* pFrameGrabberCodecContext = avcodec_alloc_context3(pFrameGrabberCodec);

 if (pFrameGrabberCodec == nullptr)
 {
 std::cerr << "Codec not found." << '\n';
 return -1;
 }

 pFrameGrabberCodecContext->pix_fmt = frameGrabberPixelFormat;
 pFrameGrabberCodecContext->width = frameGrabberPixelWidth;
 pFrameGrabberCodecContext->height = frameGrabberPixelHeight;

 int ret = avcodec_open2(pFrameGrabberCodecContext, pFrameGrabberCodec, nullptr);

 if (ret < 0)
 {
 std::cerr << "Could not open pVideoCodec." << '\n';
 return -1;
 }

 const char* outputFilePath = "c:\\temp\\output.mp4";
 constexpr int outputWidth = frameGrabberPixelWidth;
 constexpr int outputHeight = frameGrabberPixelHeight;
 constexpr int outputFrameRate = frameGrabberFrameRate;

 SwsContext* img_convert_ctx = sws_getContext(frameGrabberPixelWidth, frameGrabberPixelHeight,
 frameGrabberPixelFormat, outputWidth, outputHeight, AV_PIX_FMT_NV12,
 SWS_BICUBIC, nullptr, nullptr, nullptr);

 constexpr double frameTimeinUs = 1000000.0 / frameGrabberFrameRate;

 LARGE_INTEGER frequency;
 LARGE_INTEGER lastTime, currentTime;

 QueryPerformanceFrequency(&frequency);
 QueryPerformanceCounter(&lastTime);

 //SDL----------------------------

 if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_EVENTS))
 {
 printf("Could not initialize SDL - %s\n", SDL_GetError());
 return -1;
 }

 SDL_Window* screen = SDL_CreateWindow("3P FrameGrabber SuperApp", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
 frameGrabberPixelWidth, frameGrabberPixelHeight,
 SDL_WINDOW_RESIZABLE | SDL_WINDOW_OPENGL);

 if (!screen)
 {
 printf("SDL: could not set video mode - exiting:%s\n", SDL_GetError());
 return -1;
 }

 SDL_Renderer* renderer = SDL_CreateRenderer(screen, -1, 0);

 if (!renderer)
 {
 printf("SDL: could not create renderer - exiting:%s\n", SDL_GetError());
 return -1;
 }

 SDL_Texture* texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING,
 frameGrabberPixelWidth, frameGrabberPixelHeight);

 if (!texture)
 {
 printf("SDL: could not create texture - exiting:%s\n", SDL_GetError());
 return -1;
 }

 SDL_Event event;

 //SDL End------------------------

 const AVCodec* pVideoCodec = avcodec_find_encoder_by_name("h264_qsv");

 if (!pVideoCodec)
 {
 std::cerr << "Codec not found" << '\n';
 return 1;
 }

 AVCodecContext* pVideoCodecContext = avcodec_alloc_context3(pVideoCodec);

 if (!pVideoCodecContext)
 {
 std::cerr << "Could not allocate video pVideoCodec context" << '\n';
 return 1;
 }

 AVBufferRef* pHardwareDeviceContextRef = nullptr;

 ret = av_hwdevice_ctx_create(&pHardwareDeviceContextRef, AV_HWDEVICE_TYPE_QSV,
 "PCI\\VEN_8086&DEV_5912&SUBSYS_310217AA&REV_04\\3&11583659&0&10", nullptr, 0);
 check_error(ret);

 pVideoCodecContext->bit_rate = static_cast<int64_t>(outputWidth * outputHeight) * 2;
 pVideoCodecContext->width = outputWidth;
 pVideoCodecContext->height = outputHeight;
 pVideoCodecContext->framerate = {outputFrameRate, 1};
 pVideoCodecContext->time_base = {1, outputFrameRate};
 pVideoCodecContext->pix_fmt = AV_PIX_FMT_QSV;
 pVideoCodecContext->max_b_frames = 0;

 AVBufferRef* pHardwareFramesContextRef = av_hwframe_ctx_alloc(pHardwareDeviceContextRef);

 AVHWFramesContext* pHardwareFramesContext = reinterpret_cast<AVHWFramesContext*>(pHardwareFramesContextRef->data);

 pHardwareFramesContext->format = AV_PIX_FMT_QSV;
 pHardwareFramesContext->sw_format = AV_PIX_FMT_NV12;
 pHardwareFramesContext->width = outputWidth;
 pHardwareFramesContext->height = outputHeight;
 pHardwareFramesContext->initial_pool_size = 20;

 ret = av_hwframe_ctx_init(pHardwareFramesContextRef);
 check_error(ret);

 pVideoCodecContext->hw_device_ctx = nullptr;
 pVideoCodecContext->hw_frames_ctx = av_buffer_ref(pHardwareFramesContextRef);

 ret = avcodec_open2(pVideoCodecContext, pVideoCodec, nullptr); //&pVideoOptionsDict);
 check_error(ret);

 AVFormatContext* pVideoFormatContext = nullptr;

 avformat_alloc_output_context2(&pVideoFormatContext, nullptr, nullptr, outputFilePath);

 if (!pVideoFormatContext)
 {
 std::cerr << "Could not create output context" << '\n';
 return 1;
 }

 const AVOutputFormat* pVideoOutputFormat = pVideoFormatContext->oformat;

 if (pVideoFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
 {
 pVideoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 }

 const AVStream* pVideoStream = avformat_new_stream(pVideoFormatContext, pVideoCodec);

 if (!pVideoStream)
 {
 std::cerr << "Could not allocate stream" << '\n';
 return 1;
 }

 ret = avcodec_parameters_from_context(pVideoStream->codecpar, pVideoCodecContext);

 check_error(ret);

 if (!(pVideoOutputFormat->flags & AVFMT_NOFILE))
 {
 ret = avio_open(&pVideoFormatContext->pb, outputFilePath, AVIO_FLAG_WRITE);
 check_error(ret);
 }

 ret = avformat_write_header(pVideoFormatContext, nullptr);

 check_error(ret);

 AVFrame* pHardwareFrame = av_frame_alloc();

 if (av_hwframe_get_buffer(pVideoCodecContext->hw_frames_ctx, pHardwareFrame, 0) < 0)
 {
 std::cerr << "Error allocating a hw frame" << '\n';
 return -1;
 }

 AVFrame* pFrameGrabberFrame = av_frame_alloc();
 AVPacket* pFrameGrabberPacket = av_packet_alloc();

 AVPacket* pVideoPacket = av_packet_alloc();
 AVFrame* pVideoFrame = av_frame_alloc();

 AVFrame* pSwappedFrame = av_frame_alloc();
 av_frame_get_buffer(pSwappedFrame, 32);

 INT64 frameCount = 0;

 pFrameGrabberCodecContext->time_base = {1, frameGrabberFrameRate};

 AVFilterContext* buffersrc_ctx = nullptr;
 AVFilterContext* buffersink_ctx = nullptr;
 AVFilterContext* overlay_src_ctx = nullptr;
 AVFilterGraph* filter_graph = nullptr;

 if ((ret = init_overlay_filter(&filter_graph, &buffersrc_ctx, &overlay_src_ctx, &buffersink_ctx)) < 0)
 {
 return ret;
 }

 // Load overlay image
 AVFormatContext* overlay_fmt_ctx = nullptr;
 AVCodecContext* overlay_codec_ctx = nullptr;
 const AVCodec* overlay_codec = nullptr;
 AVFrame* overlay_frame = nullptr;
 AVDictionary* overlay_options = nullptr;

 const char* overlay_image_filename = "c:\\temp\\overlay.bmp";

 av_dict_set(&overlay_options, "video_size", "165x165", 0);
 av_dict_set(&overlay_options, "pixel_format", "bgr24", 0);

 if ((ret = avformat_open_input(&overlay_fmt_ctx, overlay_image_filename, nullptr, &overlay_options)) < 0)
 {
 return ret;
 }

 if ((ret = avformat_find_stream_info(overlay_fmt_ctx, nullptr)) < 0)
 {
 return ret;
 }

 int overlay_video_stream_index = -1;

 for (int i = 0; i < overlay_fmt_ctx->nb_streams; i++)
 {
 if (overlay_fmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 overlay_video_stream_index = i;
 break;
 }
 }

 if (overlay_video_stream_index == -1)
 {
 return -1;
 }

 overlay_codec = avcodec_find_decoder(overlay_fmt_ctx->streams[overlay_video_stream_index]->codecpar->codec_id);

 if (!overlay_codec)
 {
 fprintf(stderr, "Overlay codec not found.\n");
 return -1;
 }

 overlay_codec_ctx = avcodec_alloc_context3(overlay_codec);

 if (!overlay_codec_ctx)
 {
 fprintf(stderr, "Could not allocate overlay codec context.\n");
 return AVERROR(ENOMEM);
 }

 avcodec_parameters_to_context(overlay_codec_ctx, overlay_fmt_ctx->streams[overlay_video_stream_index]->codecpar);

 if ((ret = avcodec_open2(overlay_codec_ctx, overlay_codec, nullptr)) < 0)
 {
 return ret;
 }

 overlay_frame = av_frame_alloc();

 if (!overlay_frame)
 {
 fprintf(stderr, "Could not allocate overlay frame.\n");
 return AVERROR(ENOMEM);
 }

 AVPacket* overlay_packet = av_packet_alloc();

 // Read frames from the file
 while (av_read_frame(overlay_fmt_ctx, overlay_packet) >= 0)
 {
 if (overlay_packet->stream_index == overlay_video_stream_index)
 {
 ret = avcodec_send_packet(overlay_codec_ctx, overlay_packet);

 if (ret < 0)
 {
 break;
 }

 ret = avcodec_receive_frame(overlay_codec_ctx, overlay_frame);
 if (ret >= 0)
 {
 
 break; // We only need the first frame for the overlay
 }

 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 {
 continue;
 }

 break;
 }

 av_packet_unref(overlay_packet);
 }

 av_packet_unref(overlay_packet);

 while (_isRunning)
 {
 while (SDL_PollEvent(&event) != 0)
 {
 switch (event.type)
 {
 case SDL_QUIT:
 _isRunning = false;
 break;
 case SDL_KEYDOWN:
 if (event.key.keysym.sym == SDLK_ESCAPE)
 _isRunning = false;
 break;
 default: ;
 }
 }

 if (av_read_frame(pFrameGrabberFormatContext, pFrameGrabberPacket) == 0)
 {
 if (pFrameGrabberPacket->stream_index == videoIndex)
 {
 ret = avcodec_send_packet(pFrameGrabberCodecContext, pFrameGrabberPacket);

 if (ret < 0)
 {
 std::cerr << "Error sending a packet for decoding!" << '\n';
 return -1;
 }

 ret = avcodec_receive_frame(pFrameGrabberCodecContext, pFrameGrabberFrame);

 if (ret != 0)
 {
 std::cerr << "Receiving frame failed!" << '\n';
 return -1;
 }

 if (ret == AVERROR(EAGAIN) || ret == AVERROR(AVERROR_EOF))
 {
 std::cout << "End of stream detected. Exiting now." << '\n';
 return 0;
 }

 if (ret != 0)
 {
 std::cerr << "Decode Error!" << '\n';
 return -1;
 }

 // Feed the frame into the filter graph
 if (av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
 {
 fprintf(stderr, "Error while feeding the filtergraph\n");
 break;
 }

 // Push the overlay frame to the overlay_src_ctx
 ret = av_buffersrc_add_frame_flags(overlay_src_ctx, overlay_frame, AV_BUFFERSRC_FLAG_KEEP_REF);
 if (ret < 0)
 {
 fprintf(stderr, "Error while feeding the filtergraph\n");
 break;
 } 

 // Pull filtered frame from the filter graph
 AVFrame* filtered_frame = av_frame_alloc();

 ret = av_buffersink_get_frame(buffersink_ctx, filtered_frame);

 if (ret < 0)
 {
 check_error(ret);
 }

 QueryPerformanceCounter(&currentTime);

 double elapsedTime = (currentTime.QuadPart - lastTime.QuadPart) * 1000000.0 / frequency.QuadPart;

 if (elapsedTime > 0.0 && elapsedTime < frameTimeinUs)
 {
 uSleep(frameTimeinUs - elapsedTime, frequency);
 }

 SDL_UpdateTexture(texture, nullptr, filtered_frame->data[0], filtered_frame->linesize[0]);
 SDL_RenderClear(renderer);
 SDL_RenderCopy(renderer, texture, nullptr, nullptr);
 SDL_RenderPresent(renderer);

 QueryPerformanceCounter(&lastTime);

 swap_uv_planes(filtered_frame);

 ret = sws_scale_frame(img_convert_ctx, pVideoFrame, filtered_frame);

 if (ret < 0)
 {
 std::cerr << "Scaling frame for Intel QS Encoder did fail!" << '\n';
 return -1;
 }

 if (av_hwframe_transfer_data(pHardwareFrame, pVideoFrame, 0) < 0)
 {
 std::cerr << "Error transferring frame data to hw frame!" << '\n';
 return -1;
 }

 pHardwareFrame->pts = frameCount++;

 ret = avcodec_send_frame(pVideoCodecContext, pHardwareFrame);

 if (ret < 0)
 {
 std::cerr << "Error sending a frame for encoding" << '\n';
 check_error(ret);
 }

 av_packet_unref(pVideoPacket);

 while (ret >= 0)
 {
 ret = avcodec_receive_packet(pVideoCodecContext, pVideoPacket);

 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 {
 break;
 }

 if (ret < 0)
 {
 std::cerr << "Error during encoding" << '\n';
 return 1;
 }

 av_packet_rescale_ts(pVideoPacket, pVideoCodecContext->time_base, pVideoStream->time_base);

 pVideoPacket->stream_index = pVideoStream->index;

 ret = av_interleaved_write_frame(pVideoFormatContext, pVideoPacket);

 check_error(ret);

 av_packet_unref(pVideoPacket);
 }

 av_packet_unref(pFrameGrabberPacket);
 av_frame_free(&filtered_frame);
 }
 }
 }

 av_write_trailer(pVideoFormatContext);
 av_buffer_unref(&pHardwareDeviceContextRef);
 avcodec_free_context(&pVideoCodecContext);
 avio_closep(&pVideoFormatContext->pb);
 avformat_free_context(pVideoFormatContext);
 av_packet_free(&pVideoPacket);

 avcodec_free_context(&pFrameGrabberCodecContext);
 av_frame_free(&pFrameGrabberFrame);
 av_packet_free(&pFrameGrabberPacket);
 avformat_close_input(&pFrameGrabberFormatContext);

 return 0;
}



The console / log output when running the code:


[in @ 00000288ee494f40] Setting 'video_size' to value '1920x1080'
[in @ 00000288ee494f40] Setting 'pix_fmt' to value 'yuv420p'
[in @ 00000288ee494f40] Setting 'time_base' to value '1/25'
[in @ 00000288ee494f40] Setting 'pixel_aspect' to value '1/1'
[in @ 00000288ee494f40] w:1920 h:1080 pixfmt:yuv420p tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[overlay_in @ 00000288ff1013c0] Setting 'video_size' to value '165x165'
[overlay_in @ 00000288ff1013c0] Setting 'pix_fmt' to value 'bgr24'
[overlay_in @ 00000288ff1013c0] Setting 'time_base' to value '1/25'
[overlay_in @ 00000288ff1013c0] Setting 'pixel_aspect' to value '1/1'
[overlay_in @ 00000288ff1013c0] w:165 h:165 pixfmt:bgr24 tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[format @ 00000288ff1015c0] Setting 'pix_fmts' to value 'yuv420p'
[overlay @ 00000288ff101880] Setting 'x' to value 'W-w'
[overlay @ 00000288ff101880] Setting 'y' to value 'H-h'
[overlay @ 00000288ff101880] Setting 'enable' to value 'between(t,0,20)'
[overlay @ 00000288ff101880] Setting 'format' to value 'yuv420'
[auto_scale_0 @ 00000288ff101ec0] w:iw h:ih flags:'' interl:0
[format @ 00000288ff1015c0] auto-inserting filter 'auto_scale_0' between the filter 'overlay_in' and the filter 'format'
[auto_scale_1 @ 00000288ee4a4cc0] w:iw h:ih flags:'' interl:0
[overlay @ 00000288ff101880] auto-inserting filter 'auto_scale_1' between the filter 'format' and the filter 'overlay'
[AVFilterGraph @ 00000288ee495c80] query_formats: 5 queried, 6 merged, 6 already done, 0 delayed
[auto_scale_0 @ 00000288ff101ec0] w:165 h:165 fmt:bgr24 csp:gbr range:pc sar:1/1 -> w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[auto_scale_1 @ 00000288ee4a4cc0] w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 -> w:165 h:165 fmt:yuva420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[overlay @ 00000288ff101880] main w:1920 h:1080 fmt:yuv420p overlay w:165 h:165 fmt:yuva420p
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Selected 1/25 time base
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Sync level 2



I tried changing the index / order in which the two different frames are pushed into the filter graph. Once I did get a frame out of the graph, but it had the dimensions of the overlay image, not the dimensions of the frame grabbed from the grabber card. So I suppose I am doing something wrong when building up the filter graph.
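(One way to double-check the wiring after avfilter_graph_config() succeeds is to dump the negotiated graph; a small sketch, not in the original code:)

// Print every filter instance and how the pads ended up linked.
char* dump = avfilter_graph_dump(filter_graph, nullptr);
if (dump)
{
    fprintf(stderr, "%s\n", dump);
    av_free(dump);
}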


To verify that the FFmpeg build contains all the necessary modules, I ran the same procedure via the FFmpeg executable on the console; it worked and the result was as expected.


The command line producing the expected output is the following:


ffmpeg -f dshow -i video="MZ0380 PCI, Analog 01 Capture" -video_size 1920x1080 -framerate 25 -pixel_format yuv420p -loglevel debug -i "C:\temp\overlay.bmp" -filter_complex "[0:v][1:v] overlay=W-w:H-h:enable='between(t,0,20)'" -pix_fmt yuv420p -c:a copy output.mp4



-
Problems using libavfilter for adding overlay to frames
12 November 2024, by Michael Werner
On Windows 11, with the latest libav (full build), a C/C++ app reads YUV420P frames from a frame grabber card.


I want to draw a bitmap (BGR24) overlay image loaded from a file on every frame via libavfilter. First I convert the BGR24 overlay image to YUV420P via the format filter. Then I feed the YUV420P frame from the frame grabber and the YUV420P overlay into the overlay filter.


Everything seems to be fine, but when I try to get the frame out of the filter graph I always get a "Resource temporarily unavailable" (EAGAIN) return code, no matter how many frames I put into the graph.


The frames from the frame grabber card are fine; I can encode them or write them to a .yuv file. The overlay frame looks fine too.


My current initialization code looks like below. It does not report any errors or warnings, but when I try to get the filtered frame out of the graph via av_buffersink_get_frame I always get an EAGAIN return code.

Here is my current initialization code:


int init_overlay_filter(AVFilterGraph** graph, AVFilterContext** src_ctx, AVFilterContext** overlay_src_ctx,
 AVFilterContext** sink_ctx)
{
 AVFilterGraph* filter_graph;
 AVFilterContext* buffersrc_ctx;
 AVFilterContext* overlay_buffersrc_ctx;
 AVFilterContext* buffersink_ctx;
 AVFilterContext* overlay_ctx;
 AVFilterContext* format_ctx;
 const AVFilter *buffersrc, *buffersink, *overlay_buffersrc, *overlay_filter, *format_filter;
 int ret;

 // Create the filter graph
 filter_graph = avfilter_graph_alloc();
 if (!filter_graph)
 {
 fprintf(stderr, "Unable to create filter graph.\n");
 return AVERROR(ENOMEM);
 }

 // Create buffer source filter for main video
 buffersrc = avfilter_get_by_name("buffer");
 if (!buffersrc)
 {
 fprintf(stderr, "Unable to find buffer filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create buffer source filter for overlay image
 overlay_buffersrc = avfilter_get_by_name("buffer");
 if (!overlay_buffersrc)
 {
 fprintf(stderr, "Unable to find buffer filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create buffer sink filter
 buffersink = avfilter_get_by_name("buffersink");
 if (!buffersink)
 {
 fprintf(stderr, "Unable to find buffersink filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create overlay filter
 overlay_filter = avfilter_get_by_name("overlay");
 if (!overlay_filter)
 {
 fprintf(stderr, "Unable to find overlay filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Create format filter
 format_filter = avfilter_get_by_name("format");
 if (!format_filter) 
 {
 fprintf(stderr, "Unable to find format filter.\n");
 return AVERROR_FILTER_NOT_FOUND;
 }

 // Initialize the main video buffer source
 char args[512];
 snprintf(args, sizeof(args),
 "video_size=1920x1080:pix_fmt=yuv420p:time_base=1/25:pixel_aspect=1/1");
 ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph);
 if (ret < 0)
 {
 fprintf(stderr, "Unable to create buffer source filter for main video.\n");
 return ret;
 }

 // Initialize the overlay buffer source
 snprintf(args, sizeof(args),
 "video_size=165x165:pix_fmt=bgr24:time_base=1/25:pixel_aspect=1/1");
 ret = avfilter_graph_create_filter(&overlay_buffersrc_ctx, overlay_buffersrc, "overlay_in", args, NULL,
 filter_graph);
 if (ret < 0)
 {
 fprintf(stderr, "Unable to create buffer source filter for overlay.\n");
 return ret;
 }

 // Initialize the format filter to convert overlay image to yuv420p
 snprintf(args, sizeof(args), "pix_fmts=yuv420p");
 ret = avfilter_graph_create_filter(&format_ctx, format_filter, "format", args, NULL, filter_graph);

 if (ret < 0) 
 {
 fprintf(stderr, "Unable to create format filter.\n");
 return ret;
 }

 // Initialize the buffer sink
 ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, NULL, filter_graph);
 if (ret < 0)
 {
 fprintf(stderr, "Unable to create buffer sink filter.\n");
 return ret;
 }

 // Initialize the overlay filter
 ret = avfilter_graph_create_filter(&overlay_ctx, overlay_filter, "overlay", "W-w:H-h:enable='between(t,0,20)':format=yuv420", NULL, filter_graph);
 if (ret < 0)
 {
 fprintf(stderr, "Unable to create overlay filter.\n");
 return ret;
 }

 // Connect the filters
 ret = avfilter_link(overlay_buffersrc_ctx, 0, format_ctx, 0);

 if (ret >= 0)
 {
 ret = avfilter_link(buffersrc_ctx, 0, overlay_ctx, 0);
 }
 else
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }


 if (ret >= 0) 
 {
 ret = avfilter_link(format_ctx, 0, overlay_ctx, 1);
 }
 else
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }

 if (ret >= 0) 
 {
 if ((ret = avfilter_link(overlay_ctx, 0, buffersink_ctx, 0)) < 0)
 {
 fprintf(stderr, "Unable to link filter graph.\n");
 return ret;
 }
 }
 else
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }

 // Configure the filter graph
 if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
 {
 fprintf(stderr, "Unable to configure filter graph.\n");
 return ret;
 }

 *graph = filter_graph;
 *src_ctx = buffersrc_ctx;
 *overlay_src_ctx = overlay_buffersrc_ctx;
 *sink_ctx = buffersink_ctx;

 return 0;
}



Feeding the filter graph is done this way:


av_buffersrc_add_frame_flags(buffersrc_ctx, pFrameGrabberFrame, AV_BUFFERSRC_FLAG_KEEP_REF)
av_buffersink_get_frame(buffersink_ctx, filtered_frame)



av_buffersink_get_frame always returns EAGAIN, no matter how many frames I feed into the graph. The frames themselves (from the frame grabber and the overlay frame) look fine.
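(For comparison, the usual consumption pattern is to drain the sink in a loop and treat EAGAIN as "the graph needs more input" rather than as an error. A minimal sketch of one feed/drain iteration, assuming both input frames carry a valid, increasing pts in the 1/25 time base configured above; the helper name is made up:)

// Hypothetical helper illustrating the feed/drain pattern; frame names are placeholders.
int push_and_drain(AVFilterContext* src_ctx, AVFilterContext* overlay_src_ctx,
                   AVFilterContext* sink_ctx, AVFrame* grabbed, AVFrame* overlay,
                   int64_t pts)
{
    grabbed->pts = pts;   // both inputs need timestamps so the overlay framesync can pair them
    overlay->pts = pts;

    int ret = av_buffersrc_add_frame_flags(src_ctx, grabbed, AV_BUFFERSRC_FLAG_KEEP_REF);
    if (ret < 0)
        return ret;

    ret = av_buffersrc_add_frame_flags(overlay_src_ctx, overlay, AV_BUFFERSRC_FLAG_KEEP_REF);
    if (ret < 0)
        return ret;

    AVFrame* out = av_frame_alloc();
    if (!out)
        return AVERROR(ENOMEM);

    while ((ret = av_buffersink_get_frame(sink_ctx, out)) >= 0)
    {
        // ... use the filtered frame here (display / encode) ...
        av_frame_unref(out);
    }

    av_frame_free(&out);

    // EAGAIN here only means "feed more frames"; EOF and real errors are fatal.
    return (ret == AVERROR(EAGAIN)) ? 0 : ret;
}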

I set the libav logging level to maximum, but I do not see any warnings, errors, or helpful related information in the log.


Here is the log output related to the filter configuration:


[in @ 00000288ee494f40] Setting 'video_size' to value '1920x1080'
[in @ 00000288ee494f40] Setting 'pix_fmt' to value 'yuv420p'
[in @ 00000288ee494f40] Setting 'time_base' to value '1/25'
[in @ 00000288ee494f40] Setting 'pixel_aspect' to value '1/1'
[in @ 00000288ee494f40] w:1920 h:1080 pixfmt:yuv420p tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[overlay_in @ 00000288ff1013c0] Setting 'video_size' to value '165x165'
[overlay_in @ 00000288ff1013c0] Setting 'pix_fmt' to value 'bgr24'
[overlay_in @ 00000288ff1013c0] Setting 'time_base' to value '1/25'
[overlay_in @ 00000288ff1013c0] Setting 'pixel_aspect' to value '1/1'
[overlay_in @ 00000288ff1013c0] w:165 h:165 pixfmt:bgr24 tb:1/25 fr:0/1 sar:1/1 csp:unknown range:unknown
[format @ 00000288ff1015c0] Setting 'pix_fmts' to value 'yuv420p'
[overlay @ 00000288ff101880] Setting 'x' to value 'W-w'
[overlay @ 00000288ff101880] Setting 'y' to value 'H-h'
[overlay @ 00000288ff101880] Setting 'enable' to value 'between(t,0,20)'
[overlay @ 00000288ff101880] Setting 'format' to value 'yuv420'
[auto_scale_0 @ 00000288ff101ec0] w:iw h:ih flags:'' interl:0
[format @ 00000288ff1015c0] auto-inserting filter 'auto_scale_0' between the filter 'overlay_in' and the filter 'format'
[auto_scale_1 @ 00000288ee4a4cc0] w:iw h:ih flags:'' interl:0
[overlay @ 00000288ff101880] auto-inserting filter 'auto_scale_1' between the filter 'format' and the filter 'overlay'
[AVFilterGraph @ 00000288ee495c80] query_formats: 5 queried, 6 merged, 6 already done, 0 delayed
[auto_scale_0 @ 00000288ff101ec0] w:165 h:165 fmt:bgr24 csp:gbr range:pc sar:1/1 -> w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[auto_scale_1 @ 00000288ee4a4cc0] w:165 h:165 fmt:yuv420p csp:unknown range:unknown sar:1/1 -> w:165 h:165 fmt:yuva420p csp:unknown range:unknown sar:1/1 flags:0x00000004
[overlay @ 00000288ff101880] main w:1920 h:1080 fmt:yuv420p overlay w:165 h:165 fmt:yuva420p
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Selected 1/25 time base
[overlay @ 00000288ff101880] [framesync @ 00000288ff1019a8] Sync level 2



-
Frames taken with ELP camera have unknown pixel format at FHD?
11 November 2024, by Marcel Kopera
I'm trying to take one frame every x seconds from my USB camera. The name of the camera is: ELP-USBFHD06H-SFV(5-50).
The code is not 100% done yet, but I'm using it this way right now ↓ (the shot function is called from main.py in a loop)


import cv2
import subprocess

from time import sleep
from collections import namedtuple

from errors import *

class Camera:
 def __init__(self, cam_index, res_width, res_height, pic_format, day_time_exposure_ms, night_time_exposure_ms):
 Resolution = namedtuple("resolution", ["width", "height"])
 self.manual_mode(True)

 self.cam_index = cam_index
 self.camera_resolution = Resolution(res_width, res_height)
 self.picture_format = pic_format
 self.day_time_exposure_ms = day_time_exposure_ms
 self.night_time_exposure_ms = night_time_exposure_ms

 self.started: bool = False
 self.night_mode = False

 self.cap = cv2.VideoCapture(self.cam_index, cv2.CAP_V4L2)
 self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, self.camera_resolution.width)
 self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, self.camera_resolution.height)
 self.cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*self.picture_format))

 

 def start(self):
 sleep(1)
 if not self.cap.isOpened():
 return CameraCupError()

 self.set_exposure_time(self.day_time_exposure_ms)
 self.set_brightness(0)
 sleep(0.1)
 
 self.started = True



 def shot(self, picture_name, is_night):
 if not self.started:
 return InitializationError()

 self.configure_mode(is_night)

 # Clear buffer
 for _ in range(5):
 ret, _ = self.cap.read()

 ret, frame = self.cap.read()

 sleep(0.1)

 if ret:
 print(picture_name)
 cv2.imwrite(picture_name, frame)
 return True

 else:
 print("No photo")
 return False


 
 def release(self):
 self.set_exposure_time(156)
 self.set_brightness(0)
 self.manual_mode(False)
 self.cap.release()



 def manual_mode(self, switch: bool):
 if switch:
 subprocess.run(["v4l2-ctl", "--set-ctrl=auto_exposure=1"])
 else:
 subprocess.run(["v4l2-ctl", "--set-ctrl=auto_exposure=3"])
 sleep(1)

 
 
 def configure_mode(self, is_night):
 if is_night == self.night_mode:
 return

 if is_night:
 self.night_mode = is_night
 self.set_exposure_time(self.night_time_exposure_ms)
 self.set_brightness(64)
 else:
 self.night_mode = is_night
 self.set_exposure_time(self.day_time_exposure_ms)
 self.set_brightness(0)
 sleep(0.1)



 def set_exposure_time(self, ms: int):
 ms = int(ms)
 default_val = 156

 if ms < 1 or ms > 5000:
 ms = default_val

 self.cap.set(cv2.CAP_PROP_EXPOSURE, ms)



 def set_brightness(self, value: int):
 value = int(value)
 default_val = 0

 if value < -64 or value > 64:
 value = default_val

 self.cap.set(cv2.CAP_PROP_BRIGHTNESS, value)



Here are the settings for the camera (yaml file):


camera:
 camera_index: 0
 res_width: 1920
 res_height: 1080
 picture_format: "MJPG"
 day_time_exposure_ms: 5
 night_time_exposure_ms: 5000
 photos_format: "jpg"




I do some configuration, like setting manual mode for the camera, changing exposure/brightness, and saving the frame.
Also, the camera is probably keeping frames in a buffer (it does not deliver the latest frame in real time; it lags a bit), so I have to clear the buffer every time, like this:


# Clear buffer from old frames
 for _ in range(5):
 ret, _ = self.cap.read()
 
 # Get a new frame
 ret, frame = self.cap.read()



I really don't like this, but I could not find a better way (tl;dr: setting the buffer to 1 frame doesn't work on my camera).


Frames saved this way look good at 1920x1080 resolution. BUT when I try to run an ffmpeg command to make a timelapse from the saved jpg files, like this

ffmpeg -framerate 20 -pattern_type glob -i "*.jpg" -c:v libx264 output.mp4



I got an error like this one


[image2 @ 0x555609c45240] Could not open file : 08:59:20.jpg
[image2 @ 0x555609c45240] Could not find codec parameters for stream 0 (Video: mjpeg, none(bt470bg/unknown/unknown)): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2, from '*.jpg':
 Duration: 00:00:00.05, start: 0.000000, bitrate: N/A
 Stream #0:0: Video: mjpeg, none(bt470bg/unknown/unknown), 20 fps, 20 tbr, 20 tbn
Output #0, mp4, to 'output.mp4':
Output file #0 does not contain any stream



Also, when I try to copy the files from Linux to Windows I get some weird copy-failure error and an option to skip the picture. But even when I press the skip button, the picture is copied and can be opened. I'm not sure what is wrong with the format, but the camera supports MJPG at 1920x1080.


>>> v4l2-ctl --all

Driver Info:
 Driver name : uvcvideo
 Card type : H264 USB Camera: USB Camera
 Bus info : usb-xhci-hcd.1-1
 Driver version : 6.6.51
 Capabilities : 0x84a00001
 Video Capture
 Metadata Capture
 Streaming
 Extended Pix Format
 Device Capabilities
 Device Caps : 0x04200001
 Video Capture
 Streaming
 Extended Pix Format
Media Driver Info:
 Driver name : uvcvideo
 Model : H264 USB Camera: USB Camera
 Serial : 2020032801
 Bus info : usb-xhci-hcd.1-1
 Media version : 6.6.51
 Hardware revision: 0x00000100 (256)
 Driver version : 6.6.51
Interface Info:
 ID : 0x03000002
 Type : V4L Video
Entity Info:
 ID : 0x00000001 (1)
 Name : H264 USB Camera: USB Camera
 Function : V4L2 I/O
 Flags : default
 Pad 0x0100000d : 0: Sink
 Link 0x0200001a: from remote pad 0x1000010 of entity 'Extension 4' (Video Pixel Formatter): Data, Enabled, Immutable
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
 Width/Height : 1920/1080
 Pixel Format : 'MJPG' (Motion-JPEG)
 Field : None
 Bytes per Line : 0
 Size Image : 4147789
 Colorspace : sRGB
 Transfer Function : Default (maps to sRGB)
 YCbCr/HSV Encoding: Default (maps to ITU-R 601)
 Quantization : Default (maps to Full Range)
 Flags :
Crop Capability Video Capture:
 Bounds : Left 0, Top 0, Width 1920, Height 1080
 Default : Left 0, Top 0, Width 1920, Height 1080
 Pixel Aspect: 1/1
Selection Video Capture: crop_default, Left 0, Top 0, Width 1920, Height 1080, Flags:
Selection Video Capture: crop_bounds, Left 0, Top 0, Width 1920, Height 1080, Flags:
Streaming Parameters Video Capture:
 Capabilities : timeperframe
 Frames per second: 15.000 (15/1)
 Read buffers : 0

User Controls

 brightness 0x00980900 (int) : min=-64 max=64 step=1 default=0 value=64
 contrast 0x00980901 (int) : min=0 max=64 step=1 default=32 value=32
 saturation 0x00980902 (int) : min=0 max=128 step=1 default=56 value=56
 hue 0x00980903 (int) : min=-40 max=40 step=1 default=0 value=0
 white_balance_automatic 0x0098090c (bool) : default=1 value=1
 gamma 0x00980910 (int) : min=72 max=500 step=1 default=100 value=100
 gain 0x00980913 (int) : min=0 max=100 step=1 default=0 value=0
 power_line_frequency 0x00980918 (menu) : min=0 max=2 default=1 value=1 (50 Hz)
 0: Disabled
 1: 50 Hz
 2: 60 Hz
 white_balance_temperature 0x0098091a (int) : min=2800 max=6500 step=1 default=4600 value=4600 flags=inactive
 sharpness 0x0098091b (int) : min=0 max=6 step=1 default=3 value=3
 backlight_compensation 0x0098091c (int) : min=0 max=2 step=1 default=1 value=1

Camera Controls

 auto_exposure 0x009a0901 (menu) : min=0 max=3 default=3 value=1 (Manual Mode)
 1: Manual Mode
 3: Aperture Priority Mode
 exposure_time_absolute 0x009a0902 (int) : min=1 max=5000 step=1 default=156 value=5000
 exposure_dynamic_framerate 0x009a0903 (bool) : default=0 value=0



I also tried to save the picture using ffmpeg, in case something is not right with opencv, like this:

ffmpeg -f v4l2 -framerate 30 -video_size 1920x1080 -i /dev/video0 -c:v libx264 -preset fast -crf 23 -t 00:01:00 output.mp4




It saves the picture, but it also changes its format:


[video4linux2,v4l2 @ 0x555659ed92b0] The V4L2 driver changed the video from 1920x1080 to 800x600
[video4linux2,v4l2 @ 0x555659ed92b0] The driver changed the time per frame from 1/30 to 1/15



But the format looks right when I set it back to FHD using v4l2:



>>> v4l2-ctl --device=/dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=MJPG
>>> v4l2-ctl --get-fmt-video

Format Video Capture:
 Width/Height : 1920/1080
 Pixel Format : 'MJPG' (Motion-JPEG)
 Field : None
 Bytes per Line : 0
 Size Image : 4147789
 Colorspace : sRGB
 Transfer Function : Default (maps to sRGB)
 YCbCr/HSV Encoding: Default (maps to ITU-R 601)
 Quantization : Default (maps to Full Range)
 Flags :



I'm not sure what could be wrong with the format/camera and I don't think I have enough information to figure it out.


I tried to use ffmpeg instead of opencv and also changed a few settings in opencv's cap config.