
Advanced search
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (71)
-
Installation in farm mode
4 February 2011, by — Farm mode makes it possible to host several MediaSPIP-type sites while installing its functional core only once.
This is the method we use on this very platform.
Using farm mode requires some knowledge of how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
To begin with, you must have installed the same files as the installation (...) -
Retrieving information from the master site when installing an instance
26 November 2010, by — Purpose
On the main site, a mutualisation instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the mutualisation instance;
It can therefore be quite sensible to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...) -
MediaSPIP v0.2
21 June 2013, by — MediaSPIP 0.2 is the first stable MediaSPIP release.
Its official release date is June 21, 2013 and it is announced here.
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
On other sites (4811)
-
GDPR Compliance and Personal Data: The Ultimate Guide
22 September 2023, by Erin — GDPR -
FFmpeg C++ API: Using HW acceleration (VAAPI) to transcode video coming from a webcam [closed]
16 April 2024, by nicoh — I'm currently trying to use HW acceleration with the FFmpeg C++ API in order to transcode the video coming from a webcam (which may vary from one config to another) into a given output format (e.g. converting the MJPEG video stream coming from the webcam to H264 so that it can be written into an MP4 file).


I have already managed to do this by transferring the AVFrame output by the HW decoder from GPU to CPU, and then transferring it to the HW encoder input (so from CPU back to GPU).
This is not very efficient and, on top of that, for the config given above (MJPEG => H264) I cannot feed the decoder output directly into the encoder, because the MJPEG HW decoder wants to output the RGBA pixel format while the H264 encoder wants NV12. So I have to perform the pixel format conversion on the CPU side.
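
For reference, the GPU-to-CPU round-trip described above looks roughly like the helper below (a minimal sketch only, not the actual code: the helper name is illustrative, most error handling and timestamp copying are omitted, and libswscale is just one way to do the RGBA to NV12 conversion on the CPU):


extern "C" {
#include <libavutil/frame.h>
#include <libavutil/hwcontext.h>
#include <libswscale/swscale.h>
}

// Download the decoded VAAPI frame to system memory, convert it to NV12 in
// software, then upload the result into a surface taken from the encoder's
// hardware frames context.
static AVFrame *roundtrip_to_encoder(AVFrame *hw_decoded, AVBufferRef *enc_hw_frames_ctx)
{
    AVFrame *sw_src = av_frame_alloc();
    if (av_hwframe_transfer_data(sw_src, hw_decoded, 0) < 0)     // GPU -> CPU (RGBA in my case)
        return nullptr;

    AVFrame *sw_nv12 = av_frame_alloc();
    sw_nv12->format = AV_PIX_FMT_NV12;
    sw_nv12->width  = sw_src->width;
    sw_nv12->height = sw_src->height;
    av_frame_get_buffer(sw_nv12, 0);

    SwsContext *sws = sws_getContext(sw_src->width, sw_src->height,
                                     static_cast<AVPixelFormat>(sw_src->format),
                                     sw_nv12->width, sw_nv12->height, AV_PIX_FMT_NV12,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    sws_scale(sws, sw_src->data, sw_src->linesize, 0, sw_src->height,
              sw_nv12->data, sw_nv12->linesize);                 // pixel format conversion on the CPU
    sws_freeContext(sws);

    AVFrame *hw_for_encoder = av_frame_alloc();
    av_hwframe_get_buffer(enc_hw_frames_ctx, hw_for_encoder, 0); // get a VAAPI surface
    av_hwframe_transfer_data(hw_for_encoder, sw_nv12, 0);        // CPU -> GPU for the encoder

    av_frame_free(&sw_src);
    av_frame_free(&sw_nv12);
    return hw_for_encoder;
}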


That's why I would like to connect the output of the HW video decoder directly to the input of the HW encoder (inside the GPU).
To do this, I followed the example given by FFmpeg: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/vaapi_transcode.c.
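
The key mechanism I'm relying on from that example is sharing the decoder's hw_frames_ctx with the encoder, roughly as below (a hypothetical helper just to sketch the idea; in the full code further down the same thing is done inline in dec_enc()):


extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>
}

// Make the encoder reuse the decoder's hardware frames context so that the
// decoded VAAPI surfaces can be fed to the encoder without leaving the GPU.
static int share_hw_frames(AVCodecContext *enc_ctx, const AVCodecContext *dec_ctx)
{
    enc_ctx->hw_frames_ctx = av_buffer_ref(dec_ctx->hw_frames_ctx); // share the surface pool
    if (!enc_ctx->hw_frames_ctx)
        return AVERROR(ENOMEM);
    enc_ctx->pix_fmt = AV_PIX_FMT_VAAPI; // encoder input frames are GPU surfaces, not SW frames
    enc_ctx->width   = dec_ctx->width;
    enc_ctx->height  = dec_ctx->height;
    return 0;
}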


This works fine when transcoding an AVI file containing MJPEG to H264, but it fails when using an MJPEG stream coming from a webcam as input.
In this case, the encoder says:


[h264_vaapi @ 0x5555555e5140] No usable encoding profile found.



Below is the code of the FFmpeg example, which I modified to open the webcam instead of an input file:


/*
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

/**
 * @file Intel VAAPI-accelerated transcoding API usage example
 * @example vaapi_transcode.c
 *
 * Perform VAAPI-accelerated transcoding.
 * Usage: vaapi_transcode input_stream codec output_stream
 * e.g: - vaapi_transcode input.mp4 h264_vaapi output_h264.mp4
 * - vaapi_transcode input.mp4 vp9_vaapi output_vp9.ivf
 */

#include <stdio.h>
#include <errno.h>
#include <iostream>

//#define USE_INPUT_FILE

extern "C"{
#include <libavutil/hwcontext.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavdevice/avdevice.h>
}

static AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
static AVBufferRef *hw_device_ctx = NULL;
static AVCodecContext *decoder_ctx = NULL, *encoder_ctx = NULL;
static int video_stream = -1;
static AVStream *ost;
static int initialized = 0;

static enum AVPixelFormat get_vaapi_format(AVCodecContext *ctx,
 const enum AVPixelFormat *pix_fmts)
{
 const enum AVPixelFormat *p;

 for (p = pix_fmts; *p != AV_PIX_FMT_NONE; p++) {
 if (*p == AV_PIX_FMT_VAAPI)
 return *p;
 }

 std::cout << "Unable to decode this file using VA-API." << std::endl;
 return AV_PIX_FMT_NONE;
}

static int open_input_file(const char *filename)
{
 int ret;
 AVCodec *decoder = NULL;
 AVStream *video = NULL;
 AVDictionary *pInputOptions = nullptr;

#ifdef USE_INPUT_FILE
 if ((ret = avformat_open_input(&ifmt_ctx, filename, NULL, NULL)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Cannot open input file '" << filename << "', Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return ret;
 }
#else
 avdevice_register_all();
 av_dict_set(&pInputOptions, "input_format", "mjpeg", 0);
 av_dict_set(&pInputOptions, "framerate", "30", 0);
 av_dict_set(&pInputOptions, "video_size", "640x480", 0);

 if ((ret = avformat_open_input(&ifmt_ctx, "/dev/video0", NULL, &pInputOptions)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Cannot open input file '" << filename << "', Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return ret;
 }
#endif

 ifmt_ctx->flags |= AVFMT_FLAG_NONBLOCK;

 if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Cannot find input stream information. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return ret;
 }

 ret = av_find_best_stream(ifmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &decoder, 0);
 if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Cannot find a video stream in the input file. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return ret;
 }
 video_stream = ret;

 if (!(decoder_ctx = avcodec_alloc_context3(decoder)))
 return AVERROR(ENOMEM);

 video = ifmt_ctx->streams[video_stream];
 if ((ret = avcodec_parameters_to_context(decoder_ctx, video->codecpar)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "avcodec_parameters_to_context error. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return ret;
 }

 decoder_ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);
 if (!decoder_ctx->hw_device_ctx) {
 std::cout << "A hardware device reference create failed." << std::endl;
 return AVERROR(ENOMEM);
 }
 decoder_ctx->get_format = get_vaapi_format;

 if ((ret = avcodec_open2(decoder_ctx, decoder, NULL)) < 0)
 {
 char errMsg[1024] = {0};
 std::cout << "Failed to open codec for decoding. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 }

 return ret;
}

static int encode_write(AVPacket *enc_pkt, AVFrame *frame)
{
 int ret = 0;

 av_packet_unref(enc_pkt);

 AVHWDeviceContext *pHwDevCtx = reinterpret_cast<AVHWDeviceContext *>(encoder_ctx->hw_device_ctx->data);
 AVHWFramesContext *pHwFrameCtx = reinterpret_cast<AVHWFramesContext *>(encoder_ctx->hw_frames_ctx->data);

 if ((ret = avcodec_send_frame(encoder_ctx, frame)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Error during encoding. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto end;
 }
 while (1) {
 ret = avcodec_receive_packet(encoder_ctx, enc_pkt);
 if (ret)
 break;

 enc_pkt->stream_index = 0;
 av_packet_rescale_ts(enc_pkt, ifmt_ctx->streams[video_stream]->time_base,
 ofmt_ctx->streams[0]->time_base);
 ret = av_interleaved_write_frame(ofmt_ctx, enc_pkt);
 if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Error during writing data to output file. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return -1;
 }
 }

end:
 if (ret == AVERROR_EOF)
 return 0;
 ret = ((ret == AVERROR(EAGAIN)) ? 0:-1);
 return ret;
}

static int dec_enc(AVPacket *pkt, const AVCodec *enc_codec, AVCodecContext *pDecCtx)
{
 AVFrame *frame;
 int ret = 0;

 ret = avcodec_send_packet(decoder_ctx, pkt);
 if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Error during decoding. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return ret;
 }

 while (ret >= 0) {
 if (!(frame = av_frame_alloc()))
 return AVERROR(ENOMEM);

 ret = avcodec_receive_frame(decoder_ctx, frame);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
 av_frame_free(&frame);
 return 0;
 } else if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Error while decoding. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto fail;
 }

 if (!initialized) {
 AVHWFramesContext *pHwFrameCtx = reinterpret_cast<AVHWFramesContext *>(decoder_ctx->hw_frames_ctx->data);
 
 /* we need to ref hw_frames_ctx of decoder to initialize encoder's codec.
 Only after we get a decoded frame, can we obtain its hw_frames_ctx */
 encoder_ctx->hw_frames_ctx = av_buffer_ref(pDecCtx->hw_frames_ctx);
 if (!encoder_ctx->hw_frames_ctx) {
 ret = AVERROR(ENOMEM);
 goto fail;
 }
 /* set AVCodecContext Parameters for encoder, here we keep them stay
 * the same as decoder.
 * xxx: now the sample can't handle resolution change case.
 */
 if(encoder_ctx->time_base.den == 1 && encoder_ctx->time_base.num == 0)
 {
 encoder_ctx->time_base = av_inv_q(ifmt_ctx->streams[video_stream]->avg_frame_rate);
 }
 else
 {
 encoder_ctx->time_base = av_inv_q(decoder_ctx->framerate);
 }
 encoder_ctx->pix_fmt = AV_PIX_FMT_VAAPI;
 encoder_ctx->width = decoder_ctx->width;
 encoder_ctx->height = decoder_ctx->height;

 if ((ret = avcodec_open2(encoder_ctx, enc_codec, NULL)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Failed to open encode codec. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto fail;
 }

 if (!(ost = avformat_new_stream(ofmt_ctx, enc_codec))) {
 std::cout << "Failed to allocate stream for output format." << std::endl;
 ret = AVERROR(ENOMEM);
 goto fail;
 }

 ost->time_base = encoder_ctx->time_base;
 ret = avcodec_parameters_from_context(ost->codecpar, encoder_ctx);
 if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Failed to copy the stream parameters. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto fail;
 }

 /* write the stream header */
 if ((ret = avformat_write_header(ofmt_ctx, NULL)) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Error while writing stream header. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto fail;
 }

 initialized = 1;
 }

 if ((ret = encode_write(pkt, frame)) < 0)
 std::cout << "Error during encoding and writing." << std::endl;

fail:
 av_frame_free(&frame);
 if (ret < 0)
 return ret;
 }
 return 0;
}

int main(int argc, char **argv)
{
 const AVCodec *enc_codec;
 int ret = 0;
 AVPacket *dec_pkt;

 if (argc != 4) {
 fprintf(stderr, "Usage: %s <input file> <encode codec> <output file>\n"
 "The output format is guessed according to the file extension.\n"
 "\n", argv[0]);
 return -1;
 }

 ret = av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_VAAPI, NULL, NULL, 0);
 if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Failed to create a VAAPI device. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 return -1;
 }

 dec_pkt = av_packet_alloc();
 if (!dec_pkt) {
 std::cout << "Failed to allocate decode packet" << std::endl;
 goto end;
 }

 if ((ret = open_input_file(argv[1])) < 0)
 goto end;

 if (!(enc_codec = avcodec_find_encoder_by_name(argv[2]))) {
 std::cout << "Could not find encoder '" << argv[2] << "'" << std::endl;
 ret = -1;
 goto end;
 }

 if ((ret = (avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, argv[3]))) < 0) {
 char errMsg[1024] = {0};
 std::cout << "Failed to deduce output format from file extension. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto end;
 }

 if (!(encoder_ctx = avcodec_alloc_context3(enc_codec))) {
 ret = AVERROR(ENOMEM);
 goto end;
 }

 ret = avio_open(&ofmt_ctx->pb, argv[3], AVIO_FLAG_WRITE);
 if (ret < 0) {
 char errMsg[1024] = {0};
 std::cout << "Cannot open output file. Error code: " << av_make_error_string(errMsg, 1024, ret) << std::endl;
 goto end;
 }

 /* read all packets and only transcoding video */
 while (ret >= 0) {
 if ((ret = av_read_frame(ifmt_ctx, dec_pkt)) < 0)
 break;

 if (video_stream == dec_pkt->stream_index)
 ret = dec_enc(dec_pkt, enc_codec, decoder_ctx);

 av_packet_unref(dec_pkt);
 }

 /* flush decoder */
 av_packet_unref(dec_pkt);
 ret = dec_enc(dec_pkt, enc_codec, decoder_ctx);

 /* flush encoder */
 ret = encode_write(dec_pkt, NULL);

 /* write the trailer for output stream */
 av_write_trailer(ofmt_ctx);

end:
 avformat_close_input(&ifmt_ctx);
 avformat_close_input(&ofmt_ctx);
 avcodec_free_context(&decoder_ctx);
 avcodec_free_context(&encoder_ctx);
 av_buffer_unref(&hw_device_ctx);
 av_packet_free(&dec_pkt);
 return ret;
}


And here is the content of the associated CMakeLists.txt file used to build it with gcc:


cmake_minimum_required(VERSION 3.5)

include(FetchContent)

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

set(CMAKE_VERBOSE_MAKEFILE ON)

SET (FFMPEG_HW_TRANSCODE_INCS
 ${CMAKE_CURRENT_LIST_DIR})

include_directories(
 ${CMAKE_INCLUDE_PATH}
 ${CMAKE_CURRENT_LIST_DIR}
)

project(FFmpeg_HW_transcode LANGUAGES CXX)

set(CMAKE_CXX_FLAGS "-Wall -Werror=return-type -pedantic -fPIC -gdwarf-4")
set(CMAKE_CPP_FLAGS "-Wall -Werror=return-type -pedantic -fPIC -gdwarf-4")

set(EXECUTABLE_OUTPUT_PATH "${CMAKE_CURRENT_LIST_DIR}/build/${CMAKE_BUILD_TYPE}/FFmpeg_HW_transcode")
set(LIBRARY_OUTPUT_PATH "${CMAKE_CURRENT_LIST_DIR}/build/${CMAKE_BUILD_TYPE}/FFmpeg_HW_transcode")

add_executable(${PROJECT_NAME})

target_sources(${PROJECT_NAME} PRIVATE
 vaapi_transcode.cpp)

target_link_libraries(${PROJECT_NAME}
 -L${CMAKE_CURRENT_LIST_DIR}/../build/${CMAKE_BUILD_TYPE}/FFmpeg_HW_transcode
 -lavdevice
 -lavformat
 -lavutil
 -lavcodec)



Has anyone tried to do this kind of stuff?


Thanks for your help.


-
FFmpeg C++ API: Using HW acceleration (VAAPI) to transcode video coming from a webcam
17 April 2024, by nicoh