Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (88)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • Other interesting software

    12 April 2011, by

    We don't claim to be the only ones doing what we do... and we certainly don't claim to be the best either... We simply try to do what we do well, and better and better...
    The following list covers software that more or less aims to do what MediaSPIP does, or that MediaSPIP more or less tries to do the same as; it doesn't really matter...
    We don't know them and haven't tried them, but you may want to take a look.
    Videopress
    Website: (...)

On other sites (6058)

  • Understanding Data Processing Agreements and How They Affect GDPR Compliance

    9 October 2023, by Erin — GDPR

    The General Data Protection Regulation (GDPR) impacts international organisations that conduct business or handle personal data in the European Union (EU), and they must know how to stay compliant.

    One way of ensuring GDPR compliance is through implementing a data processing agreement (DPA). Most businesses overlook DPAs when considering ways of maintaining user data security. So, what exactly is a DPA’s role in ensuring GDPR compliance?

    In this article, we’ll discuss DPAs, their advantages, which data protection laws require them and the clauses that make up a DPA. We’ll also discuss the consequences of non-compliance and how you can maintain GDPR compliance using Matomo.

    What is a data processing agreement?

    A data processing agreement, data protection agreement or data processing addendum is a contractual agreement between a data controller (a company) and a data processor (a third-party service provider). It defines each party’s rights and obligations regarding data protection.

    A DPA also defines the responsibilities of the controller and the processor and sets out the terms they’ll use for data processing. For instance, when MHP/Team SI sought the services of Matomo (a data processor) to get reliable and compliant web analytics, a DPA helped to outline their responsibilities and liabilities.

    A DPA is one of the basic requirements for GDPR compliance. The GDPR is an EU regulation concerning personal data protection and security. The GDPR is binding on any company that actively collects data from EU residents or citizens, regardless of their location.

    As a business, you need to know what goes into a DPA to identify the liabilities that may arise if you don’t comply with European data protection laws. For example, recurring security incidents can lead to data breaches while you process customers’ personal data.

    The average data breach cost for 2023 is $4.45 million. This amount includes regulatory fines, containment costs and business losses. As such, a DPA can help you assess the organisational security measures of your data processing methods and define the protocol for reporting a data breach.

    Why is a DPA essential for your business?

    If your company processes personal data from your customers, such as contact details, you need a DPA to ensure compliance with data security laws like GDPR. You’ll also need a DPA to hire a third party to process your data, e.g., through web analytics or cloud storage.

    But what are the benefits of having a DPA in place?

    Benefits of a data processing agreement

    A key benefit of signing a DPA is it outlines business terms with a third-party data processor and guarantees compliance with the relevant data privacy laws. A DPA also helps to create an accountability framework between you and your data processor by establishing contractual obligations.

    Additionally, a DPA helps to minimise the risk of unauthorised access to sensitive data. A DPA defines organisational measures that help protect the rights of individuals and safeguard personal data against unauthorised disclosure. Overall, before choosing a data processor, having a DPA ensures that they are capable, compliant and qualified.

    More than 120 countries have already adopted some form of data protection law to better protect their citizens and their data. Hence, it is important to know which laws require a DPA and how you can better ensure compliance.

    Which data protection laws require a DPA?

    Regulatory bodies enact data protection laws to grant consumers greater control over their data and how businesses use it. These laws ensure transparency in data processing and compliance for businesses.

    Data protection laws that require a DPA

    The following are some of the relevant data privacy laws that require you to have a DPA:

    • UK GDPR
    • Brazil LGPD
    • EU GDPR
    • Dubai PDPA
    • Colorado CPA
    • California CCPA/CPRA
    • Virginia VCDPA
    • Connecticut DPA
    • South African POPIA
    • Thailand PDPA

    Companies that don’t adhere to these data protection obligations usually face liabilities such as fines and penalties. With a DPA, you can set clear expectations regarding data processing between you and your customers.

    Review and update any DPAs with third-party processors to ensure compliance with GDPR and the laws we mentioned above. Additionally, confirm that all the relevant clauses are present for compliance with relevant data privacy laws. 

    So, what key data processing clauses should you have in your DPA? Let's take a closer look in the next section.

    Key clauses in a data processing agreement

    GDPR provides some general recommendations for what you should state in a DPA.

    Key elements found in a DPA

    Here are the elements you should include:

    Data processing specifications

    Your DPA should address the specific business purposes for data processing, the duration of processing and the categories of data under processing. It should also clearly state the party responsible for maintaining GDPR compliance and who the data subjects are, including their location and nationality.

    Your DPA should also address the data processor and controller’s responsibilities concerning data deletion and contract termination.

    Role of processor

    Your DPA should clearly state what your data processor is responsible for and liable for. Some key responsibilities include record keeping, reporting breaches and maintaining data security.

    Other roles of your data processor include providing you with audit opportunities and cooperating with data protection authorities during inquiries. If you decide to end your contract, the data processor is responsible for deleting or returning data, depending on your agreement.

    Role of controller

    Your DPA should set out the responsibilities of the data controller, which typically include issuing processing instructions to the data processor and directing them on how to handle data processing.

    Your DPA should also let you define the lawful data processes the data processor should follow and how you’ll uphold individuals’ data protection rights over their sensitive data.

    Organisational and technical specifications

    Your DPA should define specifications such as how third-party processors encrypt, access and test personal data. It should also include specifications on how the data processor and controller will maintain ongoing data security through various factors such as:

    • State of the technology: Do third-party processors have reliable technology, and can they ensure data security within their systems?
    • Costs of implementation: Does the data controller’s budget allow them to seek third-party services from industry-leading providers who can guarantee a certain level of security?
    • Variances in users’ personal freedom: Are there privacy policies and opt-out forms for users to express how they want companies to use their sensitive data?

    Moreover, your DPA should define how you and your data processor will ensure the confidentiality, availability and integrity of data processing services and systems.

    What are the penalties for DPA GDPR non-compliance?

    Regulators use GDPR’s stiff fines to encourage data controllers and third-party processors to follow best data security practices. One way of maintaining compliance is to draft a DPA with your data processor.

    The DPA should clearly outline the necessary legal requirements and include all the relevant clauses mentioned above. Understand what goes into this agreement since data protection authorities can hold your business accountable for a breach — even if a processor’s error caused it.

    Data protection authorities can issue penalties now that the GDPR is in place. For example, under Article 83 of the GDPR, penalties for data or privacy breaches or non-compliance can reach €20 million or 4% of your annual global revenue, whichever is higher.

    There are two tiers of fines: tier one and tier two. Violations related to data processors typically attract fines at the tier-one level. Tier-one fines can cost your business up to €10 million or 2% of your company’s global revenue.

    Tier-two fines result from infringements of consumers’ right to be forgotten and right to privacy. Tier-two fines can cost your business up to €20 million or 4% of your company’s global revenue.
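
    To make the arithmetic concrete, the short sketch below works out the maximum exposure for a given annual global revenue. The function and figures are our own illustration; the rule that the applicable ceiling is whichever of the two amounts is higher comes from Article 83.

#include <algorithm>
#include <cstdio>

// Illustration only: maximum possible GDPR fine for a given tier.
// Tier one: up to EUR 10 million or 2% of annual global revenue, whichever is higher.
// Tier two: up to EUR 20 million or 4% of annual global revenue, whichever is higher.
double maxFineEur(double annualGlobalRevenueEur, bool tierTwo)
{
    const double cap = tierTwo ? 20e6 : 10e6;
    const double pct = tierTwo ? 0.04 : 0.02;
    return std::max(cap, pct * annualGlobalRevenueEur);
}

int main()
{
    // Example: a company with EUR 1 billion in annual global revenue
    std::printf("tier one: EUR %.0f\n", maxFineEur(1e9, false)); // 20 000 000
    std::printf("tier two: EUR %.0f\n", maxFineEur(1e9, true));  // 40 000 000
}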

    GDPR fines make non-compliance an expensive mistake for businesses of all sizes. As such, signing a DPA with any party that acts as a data processor for your business can help you remain GDPR-compliant.

    How a DPA can help your business remain GDPR compliant

    A DPA can help your business define and adhere to lawful data processes.

    Steps to take to be DPA GDPR compliant

    So, in what other ways can a DPA help you to remain compliant with GDPR? Let's take a look!

    1. Assess data processor’s compliance

    Having a DPA helps ensure that the data processor you are working with is GDPR-compliant. You should check if they have a DPA and confirm the processor’s terms of service and legal basis.

    For example, if you want an alternative to Google Analytics that’s GDPR compliant, then you can opt for Matomo. Matomo features a DPA, which you can agree to when you sign up for web analytics services or later.

    2. Establish lawful data processes

    A DPA can also help you review your data processes to ensure they’re GDPR compliant. For example, by defining lawful data processes, you better understand personally identifiable information (PII) and how it relates to data privacy.

    Further, you can allow users to opt out of sharing their data. As such, Matomo can help you to enable Do Not Track preferences on your website.

    With this feature, users are given the option to opt in or out of tracking via a toggle in their respective browsers.

    Indeed, establishing lawful data processes helps you define the specific business purposes for collecting and processing personal data. By doing so, you get to notify your users why you need their data and get their consent to process it by including a GDPR-compliant privacy policy on your website.

    3. Anonymise your data

    Global privacy laws like GDPR and ePrivacy mandate companies to display cookie banners or seek consent before tracking visitors’ data. You can either include a cookie consent banner on your site or stop tracking cookies to follow the applicable regulations.

    Further, you can enable cookie-less tracking or easily let users opt out. For example, you can use Matomo without a cookie consent banner, exempting it from many countries’ privacy rules.

    Additionally, through a DPA, you can define organisational measures that define how you’ll anonymise all your users’ data. Matomo can help you anonymise IP addresses, and we recommend that you at least anonymise the last two bytes.

    As one of the few web analytics tools that can be used to collect data without tracking consent, Matomo is also approved by the French Data Protection Authority (CNIL).
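
    As an illustration of the "anonymise the last two bytes" recommendation above, here is a minimal sketch (our own example, not Matomo's implementation) that masks the last two octets of an IPv4 address before it is stored:

#include <string>

// Illustration only: anonymise a dotted-quad IPv4 address by zeroing its last
// two octets, e.g. "203.0.113.42" becomes "203.0.0.0".
std::string anonymiseIpv4(const std::string &ip)
{
    // keep everything up to (and excluding) the second dot, then append ".0.0"
    std::size_t first = ip.find('.');
    std::size_t second = (first == std::string::npos) ? std::string::npos
                                                      : ip.find('.', first + 1);
    if (second == std::string::npos)
        return ip; // not a dotted-quad address; leave unchanged
    return ip.substr(0, second) + ".0.0";
}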

    4. Assess the processor’s bandwidth

    Having a DPA can help you implement data retention policies that show clear retention periods. Such policies are useful when ending a contract with a third-party service provider and determining how they should handle your data.

    A DPA also helps you ensure the processor has the necessary technology to store personal data securely. You can conduct an audit to understand possible vulnerabilities and your data processor’s technological capacity.

    5. Obtain legal counsel

    When drafting a DPA, it’s important to get a consultation on what is needed to ensure complete compliance. Obtaining legal counsel points you in the right direction so you don’t make any mistakes that may lead to non-compliance.

    Conclusion

    Businesses that process users’ data are subject to several DPA contract requirements under GDPR. One of the most important is having DPAs with every third-party provider that helps them perform data processing.

    It’s important to stay updated on GDPR requirements for compliance. As such, Matomo can help you maintain lawful data processes. Matomo gives you complete control over your data and complies with GDPR requirements.

    To get started with Matomo, you can sign up for a 21-day free trial. No credit card required.

    Disclaimer

    We are not lawyers and don’t claim to be. The information provided here is to help give an introduction to GDPR. We encourage every business and website to take data privacy seriously and discuss these issues with your lawyer if you have any concerns.

  • FFmpeg error "avcodec_send_frame" returns "invalid argument"

    17 October 2023, by Paulo Coutinho

    I have a problem with the function avcodec_send_frame throwing the error Error sending frame for encoding: Invalid argument (-22). I have already searched, checked and rechecked, and found nothing. The code closely follows the ffmpeg examples. Can anyone help me? Thanks.

    


    This is my code:

    


static void callbackAddSubtitle(const Message &m, const Response r)
{
    try
    {
        av_log_set_level(AV_LOG_DEBUG);

        spdlog::debug("[Mapping :: callbackAddSubtitle] Adding subtitle...");

        auto inputOpt = m.get("input");
        auto outputOpt = m.get("output");

        if (!inputOpt.has_value() || !outputOpt.has_value())
        {
            r(std::string{"INVALID-PARAMS"});
            return;
        }

        const std::string &input = inputOpt.value();
        const std::string &output = outputOpt.value();

        // initialize input
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing input video...");

        AVFormatContext *inputFormatCtx = avformat_alloc_context();
        if (avformat_open_input(&inputFormatCtx, input.c_str(), nullptr, nullptr) != 0)
        {
            spdlog::error("Failed to open input");
            r(std::string{"ERROR-OPEN-INPUT"});
            return;
        }

        if (avformat_find_stream_info(inputFormatCtx, nullptr) < 0)
        {
            spdlog::error("Failed to find stream information");
            avformat_close_input(&inputFormatCtx);
            r(std::string{"ERROR-FIND-STREAM"});
            return;
        }

        int videoStreamIndex = av_find_best_stream(inputFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
        if (videoStreamIndex < 0)
        {
            spdlog::error("Could not find a video stream");
            r(std::string{"ERROR-FIND-VIDEO-STREAM"});
            return;
        }

        AVRational timeBase = inputFormatCtx->streams[videoStreamIndex]->time_base;

        AVCodecParameters *inputCodecPar = inputFormatCtx->streams[videoStreamIndex]->codecpar;
        const AVCodec *inputCodec = avcodec_find_decoder(inputCodecPar->codec_id);
        AVCodecContext *inputCodecCtx = avcodec_alloc_context3(inputCodec);

        avcodec_parameters_to_context(inputCodecCtx, inputCodecPar);
        avcodec_open2(inputCodecCtx, inputCodec, nullptr);

        // initialize input audio
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing input audio...");

        int audioStreamIndex = av_find_best_stream(inputFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);
        if (audioStreamIndex < 0)
        {
            spdlog::error("Could not find an audio stream");
            r(std::string{"ERROR-FIND-AUDIO-STREAM"});
            return;
        }

        AVCodecParameters *inputAudioCodecPar = inputFormatCtx->streams[audioStreamIndex]->codecpar;
        const AVCodec *inputAudioCodec = avcodec_find_decoder(inputAudioCodecPar->codec_id);
        AVCodecContext *inputAudioCodecCtx = avcodec_alloc_context3(inputAudioCodec);

        avcodec_parameters_to_context(inputAudioCodecCtx, inputAudioCodecPar);
        avcodec_open2(inputAudioCodecCtx, inputAudioCodec, nullptr);

        // initialize output video
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing output video...");

        AVFormatContext *outputFormatCtx = nullptr;
        avformat_alloc_output_context2(&outputFormatCtx, nullptr, nullptr, output.c_str());
        AVStream *outputStream = avformat_new_stream(outputFormatCtx, nullptr);

        AVCodecContext *outputCodecCtx = avcodec_alloc_context3(inputCodec);
        avcodec_parameters_to_context(outputCodecCtx, inputCodecPar);
        int retOutVideo = avcodec_open2(outputCodecCtx, inputCodec, nullptr);

        if (retOutVideo < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retOutVideo);
            spdlog::error("Failed to initialize output video: {}", err);
            r(std::string{"ERROR-INIT-OUTPUT-VIDEO"});
            return;
        }

        outputStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        outputStream->codecpar->codec_id = inputCodec->id;
        avcodec_parameters_from_context(outputStream->codecpar, outputCodecCtx);

        if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
        {
            avio_open(&outputFormatCtx->pb, output.c_str(), AVIO_FLAG_WRITE);
        }

        const char *pixelFormatName = getPixelFormatName(outputCodecCtx->pix_fmt);
        spdlog::debug("Pixel Format: {}", pixelFormatName);

        // initialize output audio
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing output audio...");

        AVStream *outputAudioStream = avformat_new_stream(outputFormatCtx, nullptr);
        AVCodecContext *outputAudioCodecCtx = avcodec_alloc_context3(inputAudioCodec);
        avcodec_parameters_to_context(outputAudioCodecCtx, inputAudioCodecPar);
        int retOutAudio = avcodec_open2(outputAudioCodecCtx, inputAudioCodec, nullptr);

        if (retOutAudio < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retOutAudio);
            spdlog::error("Failed to initialize output audio: {}", err);
            r(std::string{"ERROR-INIT-OUTPUT-AUDIO"});
            return;
        }

        outputAudioStream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
        outputAudioStream->codecpar->codec_id = inputAudioCodec->id;
        avcodec_parameters_from_context(outputAudioStream->codecpar, outputAudioCodecCtx);

        // initialize filters
        spdlog::debug("[Mapping :: callbackAddSubtitle] Initializing filters...");

        AVFilterGraph *filterGraph = avfilter_graph_alloc();
        if (!filterGraph)
        {
            spdlog::error("Failed to allocate filter graph");
            r(std::string{"ERROR-FILTER-GRAPH"});
            return;
        }

        AVFilterContext *bufferSinkCtx;
        AVFilterContext *bufferSrcCtx;

        const AVFilter *bufferSink = avfilter_get_by_name("buffersink");
        const AVFilter *bufferSrc = avfilter_get_by_name("buffer");

        // input filter
        char filterInArgs[512];
        snprintf(filterInArgs, sizeof(filterInArgs), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d", inputCodecPar->width, inputCodecPar->height, inputCodecCtx->pix_fmt, timeBase.num, timeBase.den, inputCodecCtx->sample_aspect_ratio.num, inputCodecCtx->sample_aspect_ratio.den);

        spdlog::debug("[Mapping :: callbackAddSubtitle] Buffer src args: {}", filterInArgs);

        int retFilterIn = avfilter_graph_create_filter(&bufferSrcCtx, bufferSrc, "in", filterInArgs, nullptr, filterGraph);
        if (retFilterIn < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retFilterIn);
            spdlog::error("Failed to create bufferSrcCtx: {}", err);
            r(std::string{"ERROR-CREATE-FILTER-SRC"});
            return;
        }

        // output filter
        int retFilterOut = avfilter_graph_create_filter(&bufferSinkCtx, bufferSink, "out", nullptr, nullptr, filterGraph);

        if (retFilterOut < 0)
        {
            char err[AV_ERROR_MAX_STRING_SIZE];
            av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retFilterOut);
            spdlog::error("Failed to create bufferSinkCtx: {}", err);
            r(std::string{"ERROR-CREATE-FILTER-SINK"});
            return;
        }

        enum AVPixelFormat pix_fmts[] = {AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE};
        av_opt_set_int_list(bufferSinkCtx, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);

        // add filters to graph and link them
        const char *filterSpec = "drawtext=text='Legenda Adicionada Automaticamente Via FFMPEG e C++': fontcolor=yellow: bordercolor=black: fontfile='/Users/paulo/Downloads/roboto/Roboto-Black.ttf'";
        const AVFilter *filter = avfilter_get_by_name("drawtext");

        AVFilterInOut *outputs = avfilter_inout_alloc();
        AVFilterInOut *inputs = avfilter_inout_alloc();

        outputs->name = av_strdup("in");
        outputs->filter_ctx = bufferSrcCtx;
        outputs->pad_idx = 0;
        outputs->next = nullptr;
        inputs->name = av_strdup("out");
        inputs->filter_ctx = bufferSinkCtx;
        inputs->pad_idx = 0;
        inputs->next = nullptr;

        if (avfilter_graph_parse_ptr(filterGraph, filterSpec, &inputs, &outputs, nullptr) < 0)
        {
            spdlog::error("Failed to parse filter graph");
            r(std::string{"ERROR-PARSE-FILTER"});
            return;
        }

        if (avfilter_graph_config(filterGraph, nullptr) < 0)
        {
            spdlog::error("Failed to configure filter graph");
            r(std::string{"ERROR-CONFIG-FILTER"});
            return;
        }

        // header
        spdlog::debug("[Mapping :: callbackAddSubtitle] Writing header...");

        if (avformat_write_header(outputFormatCtx, nullptr) < 0)
        {
            spdlog::error("Error writing header");
            r(std::string{"ERROR-WRITE-HEADER"});
            return;
        }

        // read frames and write to output
        AVPacket *packet = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();

        frame->format = inputCodecCtx->pix_fmt;
        frame->width = inputCodecCtx->width;
        frame->height = inputCodecCtx->height;

        AVFrame *filt_frame = av_frame_alloc();

        filt_frame->format = inputCodecCtx->pix_fmt;
        filt_frame->width = inputCodecCtx->width;
        filt_frame->height = inputCodecCtx->height;

        while (av_read_frame(inputFormatCtx, packet) >= 0)
        {
            if (packet->stream_index == videoStreamIndex)
            {
                if (avcodec_send_packet(inputCodecCtx, packet) < 0)
                {
                    spdlog::error("Error sending packet for decoding");
                    r(std::string{"ERROR-SEND-PACKET-DECODE"});
                    return;
                }

                while (avcodec_receive_frame(inputCodecCtx, frame) == 0)
                {
                    // Send the decoded frame to the filter graph
                    if (av_buffersrc_add_frame_flags(bufferSrcCtx, frame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
                    {
                        spdlog::error("Error while feeding the filtergraph");
                        r(std::string{"ERROR-FEED-FILTERGRAPH"});
                        return;
                    }

                    // Receive a frame from the filter graph
                    if (av_buffersink_get_frame(bufferSinkCtx, filt_frame) < 0)
                    {
                        spdlog::error("Error while receiving the filtered frame");
                        r(std::string{"ERROR-RECEIVE-FILTERED-FRAME"});
                        return;
                    }

                    // Send the filtered frame for re-encoding
                    int retSendFrame = avcodec_send_frame(outputCodecCtx, filt_frame);
                    if (retSendFrame < 0)
                    {
                        char err[AV_ERROR_MAX_STRING_SIZE];
                        av_make_error_string(err, AV_ERROR_MAX_STRING_SIZE, retSendFrame);
                        spdlog::error("Error sending frame for encoding: {}", err);
                        r(std::string{"ERROR-SEND-FRAME-ENCODE"});
                        return;
                    }

                    AVPacket *output_packet = av_packet_alloc();
                    output_packet->data = nullptr;
                    output_packet->size = 0;

                    // Re-encode filt_frame into a packet
                    if (avcodec_receive_packet(outputCodecCtx, output_packet) == 0)
                    {
                        // Write the packet to the output stream
                        av_write_frame(outputFormatCtx, output_packet);
                        av_packet_unref(output_packet);
                    }

                    av_frame_unref(filt_frame);
                }

                // time
                packet->pts = av_rescale_q_rnd(packet->pts, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->dts = av_rescale_q_rnd(packet->dts, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->duration = av_rescale_q(packet->duration, inputFormatCtx->streams[videoStreamIndex]->time_base, outputFormatCtx->streams[videoStreamIndex]->time_base);
                packet->stream_index = videoStreamIndex;

                // write packet to output video stream
                av_interleaved_write_frame(outputFormatCtx, packet);
            }
            else if (packet->stream_index == audioStreamIndex)
            {
                // rescale timestamps
                packet->pts = av_rescale_q_rnd(packet->pts, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->dts = av_rescale_q_rnd(packet->dts, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
                packet->duration = av_rescale_q(packet->duration, inputFormatCtx->streams[audioStreamIndex]->time_base, outputFormatCtx->streams[audioStreamIndex]->time_base);
                packet->stream_index = audioStreamIndex;

                // write packet to output audio stream
                av_interleaved_write_frame(outputFormatCtx, packet);
            }

            av_packet_unref(packet);
        }

        av_packet_free(&packet);
        av_frame_free(&frame);
        av_frame_free(&filt_frame);

        spdlog::debug("[Mapping :: callbackAddSubtitle] Writing trailer...");

        if (av_write_trailer(outputFormatCtx) < 0)
        {
            spdlog::error("Error writing trailer");
            r(std::string{"ERROR-WRITE-TRAILER"});
            return;
        }

        // cleanup
        spdlog::debug("[Mapping :: callbackAddSubtitle] Cleaning...");

        if (!(outputFormatCtx->oformat->flags & AVFMT_NOFILE))
        {
            avio_closep(&outputFormatCtx->pb);
        }

        avcodec_free_context(&inputCodecCtx);
        avcodec_free_context(&inputAudioCodecCtx);
        avcodec_free_context(&outputCodecCtx);
        avcodec_free_context(&outputAudioCodecCtx);

        avformat_free_context(inputFormatCtx);
        avformat_free_context(outputFormatCtx);

        r(std::string{"OK"});
    }
    catch (const std::exception &e)
    {
        spdlog::error("Error: {}", e.what());
        r(std::string{"ERROR"});
    }
}


    


    The error is:

    


    [2023-10-17 06:30:16.936] [debug] [Mapping :: callbackAddSubtitle] Adding subtitle...
[2023-10-17 06:30:16.936] [debug] [Mapping :: callbackAddSubtitle] Initializing input video...
[NULL @ 0x153604a60] Opening '/Users/paulo/Downloads/movie.mp4' for reading
[file @ 0x6000001fd170] Setting default whitelist 'file,crypto,data'
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] ISO: File Type Major Brand: isom
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Processing st: 0, edit list 0 - media time: 0, duration: 2669670
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Unknown dref type 0x206c7275 size 12
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Processing st: 1, edit list 0 - media time: 1024, duration: 4272096
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] drop a frame at curr_cts: 0 @ 0
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] Before avformat_find_stream_info() pos: 113542488 bytes read:110788 seeks:1 nb_streams:2
[h264 @ 0x153604cd0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] Decoding VUI
[h264 @ 0x153604cd0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] demuxer injecting skip 1024 / discard 0
[aac @ 0x1536056f0] skip 1024 / discard 0 samples due to side data
[h264 @ 0x153604cd0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] Decoding VUI
[h264 @ 0x153604cd0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[h264 @ 0x153604cd0] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0x153604cd0] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x153604cd0] Format yuv420p chosen by get_format().
[h264 @ 0x153604cd0] Reinit context to 1088x1920, pix_fmt: yuv420p
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] All info found
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x153604a60] After avformat_find_stream_info() pos: 195211 bytes read:305951 seeks:2 frames:2
[2023-10-17 06:30:18.160] [debug] [Mapping :: callbackAddSubtitle] Initializing input audio...
[h264 @ 0x143604330] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x143604330] Decoding VUI
[h264 @ 0x143604330] nal_unit_type: 8(PPS), nal_ref_idc: 3
[2023-10-17 06:30:18.160] [debug] [Mapping :: callbackAddSubtitle] Initializing output video...
[h264 @ 0x143611ec0] nal_unit_type: 7(SPS), nal_ref_idc: 3
[h264 @ 0x143611ec0] Decoding VUI
[h264 @ 0x143611ec0] nal_unit_type: 8(PPS), nal_ref_idc: 3
[file @ 0x6000001f4000] Setting default whitelist 'file,crypto,data'
[2023-10-17 06:30:18.167] [debug] Pixel Format: YUV420P
[2023-10-17 06:30:18.167] [debug] [Mapping :: callbackAddSubtitle] Initializing output audio...
[2023-10-17 06:30:18.167] [debug] [Mapping :: callbackAddSubtitle] Initializing filters...
[2023-10-17 06:30:18.168] [debug] [Mapping :: callbackAddSubtitle] Buffer src args: video_size=1080x1920:pix_fmt=0:time_base=1/30000:pixel_aspect=1/1
detected 10 logical cores
[in @ 0x6000004ec0b0] Setting 'video_size' to value '1080x1920'
[in @ 0x6000004ec0b0] Setting 'pix_fmt' to value '0'
[in @ 0x6000004ec0b0] Setting 'time_base' to value '1/30000'
[in @ 0x6000004ec0b0] Setting 'pixel_aspect' to value '1/1'
[in @ 0x6000004ec0b0] w:1080 h:1920 pixfmt:yuv420p tb:1/30000 fr:0/1 sar:1/1
[AVFilterGraph @ 0x6000017e8000] Setting 'text' to value 'Legenda Adicionada Automaticamente Via FFMPEG e C++'
[AVFilterGraph @ 0x6000017e8000] Setting 'fontcolor' to value 'yellow'
[AVFilterGraph @ 0x6000017e8000] Setting 'bordercolor' to value 'black'
[AVFilterGraph @ 0x6000017e8000] Setting 'fontfile' to value '/Users/paulo/Downloads/roboto/Roboto-Black.ttf'
[AVFilterGraph @ 0x6000017e8000] query_formats: 3 queried, 2 merged, 0 already done, 0 delayed
[2023-10-17 06:30:18.172] [debug] [Mapping :: callbackAddSubtitle] Writing header...
[h264 @ 0x143604330] nal_unit_type: 6(SEI), nal_ref_idc: 0
[h264 @ 0x143604330] nal_unit_type: 5(IDR), nal_ref_idc: 3
[h264 @ 0x143604330] Format yuv420p chosen by get_format().
[h264 @ 0x143604330] Reinit context to 1088x1920, pix_fmt: yuv420p
[Parsed_drawtext_0 @ 0x6000004f4160] Copying data in avfilter.
[Parsed_drawtext_0 @ 0x6000004f4160] n:0 t:0.000000 text_w:424 text_h:16 x:0 y:0
[2023-10-17 06:30:18.182] [error] Error sending frame for encoding: Invalid argument
Returned Value: ERROR-SEND-FRAME-ENCODE
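
    A likely cause, offered here only as a guess rather than a confirmed answer: the output video context is allocated and opened with the decoder returned by avcodec_find_decoder(), and avcodec_send_frame() returns AVERROR(EINVAL) when the context it is given is not an open encoder. Below is a minimal sketch of how the output context could instead be set up with an encoder; it reuses the variable names from the code above, and the specific encoder settings are assumptions, not a verified fix.

// Sketch: open the output video stream with an encoder rather than the input decoder.
const AVCodec *outputCodec = avcodec_find_encoder(inputCodecCtx->codec_id); // e.g. H.264
AVCodecContext *outputCodecCtx = avcodec_alloc_context3(outputCodec);

outputCodecCtx->width = inputCodecCtx->width;
outputCodecCtx->height = inputCodecCtx->height;
outputCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;                  // matches the buffersink pix_fmts list
outputCodecCtx->time_base = timeBase;                          // encoders require a valid time_base
outputCodecCtx->sample_aspect_ratio = inputCodecCtx->sample_aspect_ratio;

if (outputFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
    outputCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

if (avcodec_open2(outputCodecCtx, outputCodec, nullptr) < 0)
{
    // handle the error
}

avcodec_parameters_from_context(outputStream->codecpar, outputCodecCtx);
outputStream->time_base = outputCodecCtx->time_base;

// Each filtered frame also needs a timestamp before being sent for encoding, e.g.:
// filt_frame->pts = frame->pts;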


    


  • Multiple format changes in FFmpeg for thermal camera

    6 February 2023, by Greynol4

    I'm having trouble generating a command to process the raw data output from a UVC thermal camera so that it can be colorized and then output to a virtual device, with the intention of streaming it over RTSP. This is on a Raspberry Pi 3B+ with 32-bit Bullseye.

    


    The original command, which works perfectly for previewing it, is:

    


    ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo - | ffplay -pixel_format gray16le -video_size 256x192 -f rawvideo -i - -vf 'normalize=smoothing=10, format=pix_fmts=rgb48, pseudocolor=p=inferno'


    


    Essentially, this takes the raw data, crops out the useful portion, then pipes it to ffplay, where it is interpreted as 16-bit grayscale (in this case gray16le), normalized, converted to 48-bit RGB and finally run through a pseudocolor filter.
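
    For readers unfamiliar with those filters, the sketch below shows in plain C++, rather than FFmpeg filter syntax, roughly what "normalize" followed by "pseudocolor" does to a single gray16le frame. It is an illustration only, not part of the asker's pipeline, and the colour ramp is a simple stand-in rather than FFmpeg's inferno palette.

#include <algorithm>
#include <cstdint>
#include <vector>

struct RGB { std::uint8_t r, g, b; };

// Illustration only: min/max-normalize a 16-bit grayscale frame, then map each
// value through a crude black -> red -> yellow -> white ramp.
std::vector<RGB> colorize(const std::vector<std::uint16_t> &gray, int width, int height)
{
    std::vector<RGB> out(static_cast<std::size_t>(width) * height);
    if (gray.size() < out.size())
        return out; // incomplete frame; return black

    auto [mn, mx] = std::minmax_element(gray.begin(), gray.end());
    const double lo = *mn;
    const double hi = (*mx > lo) ? *mx : lo + 1.0; // avoid division by zero

    for (std::size_t i = 0; i < out.size(); ++i)
    {
        const double t = (gray[i] - lo) / (hi - lo); // normalized 0..1
        const double r = std::clamp(3.0 * t, 0.0, 1.0);
        const double g = std::clamp(3.0 * t - 1.0, 0.0, 1.0);
        const double b = std::clamp(3.0 * t - 2.0, 0.0, 1.0);
        out[i] = { static_cast<std::uint8_t>(r * 255), static_cast<std::uint8_t>(g * 255),
                   static_cast<std::uint8_t>(b * 255) };
    }
    return out;
}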

    


    I haven't been able to translate this into an ffmpeg-only pipeline because it throws codec errors or format errors, or converts the 16-bit to 10-bit even though I need the 16-bit. I have tried using v4l2loopback and two instances of ffmpeg in separate windows to see if I could figure out where the error was actually occurring, but I suspect that is introducing more format issues that distract from the original problem. The closest I have been able to get is

    


    ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo /dev/video3

    


    Followed by

    


    ffmpeg -video_size 256x192 -i /dev/video3  -f rawvideo -pix_fmt gray16le -vf 'normalize=smoothing=10,format=pix_fmts=rgb48, pseudocolor=p=inferno' -f rawvideo -f v4l2 /dev/video4

    


    This results in a non-colorized but somewhat useful image, with certain temperatures showing as missing pixels, as opposed to the command with ffplay, which shows a properly colorized stream without missing pixels.

    


    I'll include my configuration and the log from the preview command, but the log doesn't show errors unless I try to modify parameters and presumably mess up the syntax.

    


     ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo - | ffplay -pixel_format gray16le -video_size 256x192 -f rawvideo -i - -vf 'normalize=smoothing=10, format=pix_fmts=rgb48, pseudocolor=p=inferno'
ffplay version N-109758-gbdc76f467f Copyright (c) 2003-2023 the FFmpeg developers
  built with gcc 10 (Raspbian 10.2.1-6+rpi1)
  configuration: --prefix=/usr/local --enable-nonfree --enable-gpl --enable-hardcoded-tables --disable-ffprobe --disable-ffplay --enable-libx264 --enable-libx265 --enable-sdl --enable-sdl2 --enable-ffplay
  libavutil      57. 44.100 / 57. 44.100
  libavcodec     59. 63.100 / 59. 63.100
  libavformat    59. 38.100 / 59. 38.100
  libavdevice    59.  8.101 / 59.  8.101
  libavfilter     8. 56.100 /  8. 56.100
  libswscale      6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc    56.  7.100 / 56.  7.100
ffmpeg version N-109758-gbdc76f467f Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 10 (Raspbian 10.2.1-6+rpi1)
  configuration: --prefix=/usr/local --enable-nonfree --enable-gpl --enable-hardcoded-tables --disable-ffprobe --disable-ffplay --enable-libx264 --enable-libx265 --enable-sdl --enable-sdl2 --enable-ffplay
  libavutil      57. 44.100 / 57. 44.100
  libavcodec     59. 63.100 / 59. 63.100
  libavformat    59. 38.100 / 59. 38.100
  libavdevice    59.  8.101 / 59.  8.101
  libavfilter     8. 56.100 /  8. 56.100
  libswscale      6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc    56.  7.100 / 56.  7.100
Input #0, video4linux2,v4l2, from '/dev/video0':B sq=    0B f=0/0   
  Duration: N/A, start: 242.040935, bitrate: 39321 kb/s
  Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 256x384, 39321 kb/s, 25 fps, 25 tbr, 1000k tbn
Stream mapping:.000 fd=   0 aq=    0KB vq=    0KB sq=    0B f=0/0   
  Stream #0:0 -> #0:0 (rawvideo (native) -> rawvideo (native))
Press [q] to stop, [?] for help
Output #0, rawvideo, to 'pipe:':   0KB vq=    0KB sq=    0B f=0/0   
  Metadata:
    encoder         : Lavf59.38.100
  Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422(tv, progressive), 256x192, q=2-31, 19660 kb/s, 25 fps, 25 tbn
    Metadata:
      encoder         : Lavc59.63.100 rawvideo
frame=    0 fps=0.0 q=0.0 size=       0kB time=-577014:32:22.77 bitrate=  -0.0kbInput #0, rawvideo, from 'fd:':    0KB vq=    0KB sq=    0B f=0/0   
  Duration: N/A, start: 0.000000, bitrate: 19660 kb/s
  Stream #0:0: Video: rawvideo (Y1[0][16] / 0x10003159), gray16le, 256x192, 19660 kb/s, 25 tbr, 25 tbn
frame=   13 fps=0.0 q=-0.0 size=    1152kB time=00:00:00.52 bitrate=18148.4kbitsframe=   25 fps= 24 q=-0.0 size=    2304kB time=00:00:01.00 bitrate=18874.4kbitsframe=   39 fps= 25 q=-0.0 size=    3648kB time=00:00:01.56 bitrate=19156.7kbitsframe=   51 fps= 24 q=-0.0 size=    4800kB time=00:00:02.04 bitrate=19275.3kbitsframe=   64 fps= 24 q=-0.0 size=    6048kB time=00:00:02.56 bitrate=19353.6kbitsframe=   78 fps= 25 q=-0.0 size=    7392kB time=00:00:03.12 bitrate=19408.7kbits




    


    I'd also like to use the correct option so it isn't scrolling through every frame in the log, as well as links to resources on adapting a command into a script for beginners; even though that's outside the purview of this question, any direction on those would be much appreciated.