Advanced search

Media (91)

Other articles (40)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible, and development is based on expanding the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Custom menus

    14 November 2010

    MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
    This lets channel administrators fine-tune the configuration of these menus.
    Menus created at site initialization
    By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page, after the header block, and its identifier makes it compatible with skeletons based on Zpip; (...)

On other sites (6340)

  • avdevice/decklink: remove pthread dependency

    15 April 2017, by Aaron Levinson
    avdevice/decklink: remove pthread dependency
    

    Purpose: avdevice/decklink: Removed pthread dependency by replacing
    the semaphore used in the code appropriately. Doing so makes it easier to
    build ffmpeg using Visual C++ on Windows. This is a continuation of
    Kyle Schwarz's "avdevice/decklink: Remove pthread dependency" patch
    that is available at https://patchwork.ffmpeg.org/patch/2654/ . This
    patch wasn't accepted, and as far as I can tell, there was no
    follow-up after it was rejected.

    Notes: Used Visual Studio 2015 (with Update 3) for this.

    Comments:

    — configure: Eliminated the pthreads dependency for decklink_indev_deps
    and decklink_outdev_deps and replaced it with a threads dependency

    — libavdevice/decklink_common.cpp / .h:
    a) Eliminated the semaphore and replaced it with a combination of a mutex,
    a condition variable, and a counter (frames_buffer_available_spots).
    b) Removed the includes of pthread.h and semaphore.h; libavutil/thread.h
    is used instead.

    — libavdevice/decklink_dec.cpp: Eliminated the includes of pthread.h and
    semaphore.h.

    — libavdevice/decklink_enc.cpp:
    a) Eliminated the includes of pthread.h and semaphore.h.
    b) Replaced the use of a semaphore with the equivalent combination
    of a mutex, condition variable, and counter
    (frames_buffer_available_spots). In theory, libavutil/thread.h and
    the associated code could have been modified instead to add
    cross-platform implementations of the sem_ functions, but an
    inspection of the ffmpeg source base indicates that there are only
    two cases in which semaphores are used (including the one replaced
    here), so it was deemed not worth the effort.

    Signed-off-by: Marton Balint <cus@passwd.hu>

    • [DH] configure
    • [DH] libavdevice/decklink_common.cpp
    • [DH] libavdevice/decklink_common.h
    • [DH] libavdevice/decklink_dec.cpp
    • [DH] libavdevice/decklink_enc.cpp
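
    For readers unfamiliar with the substitution described above, here is a minimal sketch (not the FFmpeg code) of how a counting semaphore can be emulated with a mutex, a condition variable, and a counter, which is the pattern the patch applies through the pthread-style wrappers in libavutil/thread.h. C++ standard types and the class name CountingSlots are used here only to keep the example self-contained:

    // Sketch: counting semaphore built from mutex + condition variable + counter.
    #include <condition_variable>
    #include <mutex>

    class CountingSlots {                  // hypothetical name; plays the role of
    public:                                // frames_buffer_available_spots
        explicit CountingSlots(int n) : count(n) {}

        void acquire() {                   // stands in for sem_wait()
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return count > 0; });  // block until a slot is free
            --count;
        }

        void release() {                   // stands in for sem_post()
            {
                std::lock_guard<std::mutex> lk(m);
                ++count;
            }
            cv.notify_one();               // wake one waiting thread
        }

    private:
        std::mutex m;
        std::condition_variable cv;
        int count;
    };

    In this pattern, the thread queueing frames calls acquire() before scheduling a frame and the completion callback calls release(), mirroring the roles sem_wait() and sem_post() played before the patch.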
  • C++ Qt FFMPEG RTMP: getting 0 fps

    7 June 2021, by Jinwoo Lim

    I've been programming in Qt and developing an RTMP server.

    I have almost finished it, but the server reports that the incoming RTMP video has 0 fps.

    I think the data is being sent correctly: the server shows video in-bytes and audio in-bytes above 0.

    Also, whenever I play the RTMP video from the server with VLC, it only shows the single frame that was due to be displayed at the moment VLC connected to the server.

    My conclusion is that my program sends the frames correctly, but the fps the RTMP server derives for the stream is 0, so VLC refuses to play the video.

    Where did I go wrong? (A timestamping sketch follows the two source files below.)

    #include "threadcam_rtmp.h"&#xA;#include "global.h"&#xA;#include "rtmpstream.h"&#xA;&#xA;#include <vector>&#xA;&#xA;#include <opencv2></opencv2>highgui.hpp>&#xA;#include <opencv2></opencv2>video.hpp>&#xA;&#xA;extern "C" {&#xA;#include "libavformat/avformat.h"&#xA;#include "libavcodec/avcodec.h"&#xA;#include "libavutil/avutil.h"&#xA;#include "libavutil/audio_fifo.h"&#xA;#include "libavutil/time.h"&#xA;#include "libswscale/swscale.h"&#xA;#include "libavdevice/avdevice.h"&#xA;}&#xA;&#xA;#include <qapplication>&#xA;#include <qdebug>&#xA;#include <qpixmap>&#xA;&#xA;ThreadCam_RTMP::ThreadCam_RTMP(int selectedCam, QMutex &amp;mutex_img):&#xA;    mutex(mutex_img)&#xA;{&#xA;    cam_index = selectedCam;&#xA;    show_on = true;&#xA;}&#xA;&#xA;void ThreadCam_RTMP::run()&#xA;{&#xA;    //Settings for opencv cam&#xA;    cam_index = start_cam_index;&#xA;    if (!cam.open(cam_index)) cam_on = false;&#xA;    cam.set(CV_CAP_PROP_FRAME_WIDTH, img_width);&#xA;    cam.set(CV_CAP_PROP_FRAME_HEIGHT, img_height);&#xA;&#xA;    //Settings for RTMP streaming&#xA;    output = server.c_str();&#xA;&#xA;    av_register_all();&#xA;    avdevice_register_all();&#xA;    avformat_network_init();&#xA;&#xA;    ifmt = av_find_input_format("dshow");&#xA;&#xA;    AVDictionary *device_param = 0;&#xA;&#xA;    //Set audio device&#xA;    if (avformat_open_input(&amp;ifmt_ctx_a, device_name_a, ifmt, &amp;device_param) != 0)&#xA;        qDebug("Couldn&#x27;t open audio stream.");&#xA;&#xA;    //Audio input initialize&#xA;    if (avformat_find_stream_info(ifmt_ctx_a, NULL) &lt; 0)&#xA;        qDebug("Couldn&#x27;t find audio stream information.");&#xA;&#xA;    audioindex = -1;&#xA;    for (int i = 0; i &lt; ifmt_ctx_a->nb_streams; i&#x2B;&#x2B;)&#xA;    {&#xA;        if(ifmt_ctx_a->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)&#xA;        {&#xA;            audioindex = i;&#xA;            break;&#xA;        }&#xA;    }&#xA;    if (audioindex == -1) qDebug("Couldn&#x27;t find an audio stream");&#xA;    if (avcodec_open2(ifmt_ctx_a->streams[audioindex]->codec, avcodec_find_decoder(ifmt_ctx_a->streams[audioindex]->codec->codec_id), NULL) &lt; 0)&#xA;        qDebug("Couldn&#x27;t open audio codec");&#xA;&#xA;    //Output audio initialize&#xA;    out_codec_a = avcodec_find_encoder(AV_CODEC_ID_AAC);&#xA;    if(!out_codec_a)&#xA;        qDebug("Couldn&#x27;t find output audio encoder.");&#xA;    out_codec_ctx_a = avcodec_alloc_context3(out_codec_a);&#xA;    out_codec_ctx_a->channels = 2;&#xA;    out_codec_ctx_a->channel_layout = av_get_default_channel_layout(2);&#xA;    out_codec_ctx_a->sample_rate = ifmt_ctx_a->streams[audioindex]->codec->sample_rate;&#xA;    out_codec_ctx_a->sample_fmt = out_codec_a->sample_fmts[0];&#xA;    out_codec_ctx_a->bit_rate = bitrate;&#xA;    out_codec_ctx_a->time_base.num = 1;&#xA;    out_codec_ctx_a->time_base.den = out_codec_ctx_a->sample_rate;&#xA;    out_codec_ctx_a->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;&#xA;    if (avcodec_open2(out_codec_ctx_a, out_codec_a, NULL) &lt; 0) qDebug("Couldn&#x27;t open output audio encoder");&#xA;&#xA;    //Output initialize&#xA;    initialize_avformat_context(ofmt_ctx, "flv");&#xA;    initialize_io_context(ofmt_ctx, output);&#xA;&#xA;    out_codec = avcodec_find_encoder(AV_CODEC_ID_FLV1);&#xA;    if(!out_codec)&#xA;        qDebug("Couldn&#x27;t find output video encoder.");&#xA;    out_stream = avformat_new_stream(ofmt_ctx, out_codec);&#xA;    out_codec_ctx = avcodec_alloc_context3(out_codec);&#xA;&#xA;    set_codec_params(ofmt_ctx, out_codec_ctx, 
img_width, img_height, fps, bitrate);&#xA;    initialize_codec_stream(out_stream, out_codec_ctx, out_codec, codec_profile);&#xA;&#xA;    out_stream->codecpar->extradata = out_codec_ctx->extradata;&#xA;    out_stream->codecpar->extradata_size = out_codec_ctx->extradata_size;&#xA;&#xA;    //Add a new stream to output for muxing&#xA;    out_stream_a = avformat_new_stream(ofmt_ctx, out_codec_a);&#xA;    out_stream_a->time_base.num = 1;&#xA;    out_stream_a->time_base.den = out_codec_ctx_a->sample_rate;&#xA;    out_stream_a->codec = out_codec_ctx_a;&#xA;&#xA;    av_dump_format(ofmt_ctx, 0, output, 1);&#xA;&#xA;    int ret = avformat_write_header(ofmt_ctx, nullptr);&#xA;    if (ret &lt; 0)&#xA;    {&#xA;      qDebug("Could not write header!");&#xA;      QApplication::quit();&#xA;    }&#xA;&#xA;    aud_convert_ctx = swr_alloc_set_opts(NULL,&#xA;            av_get_default_channel_layout(out_codec_ctx_a->channels),&#xA;            out_codec_ctx_a->sample_fmt,&#xA;            out_codec_ctx_a->sample_rate,&#xA;            av_get_default_channel_layout(ifmt_ctx_a->streams[audioindex]->codec->channels),&#xA;            ifmt_ctx_a->streams[audioindex]->codec->sample_fmt,&#xA;            ifmt_ctx_a->streams[audioindex]->codec->sample_rate,&#xA;            0, NULL);&#xA;&#xA;    swr_init(aud_convert_ctx);&#xA;&#xA;    AVRational r_framerate1 = {fps, 1};&#xA;    int64_t calc_duration = (double)(AV_TIME_BASE)*(1 / av_q2d(r_framerate1));&#xA;    int64_t start_time = av_gettime();&#xA;&#xA;    AVAudioFifo *fifo = NULL;&#xA;    fifo = av_audio_fifo_alloc(out_codec_ctx_a->sample_fmt, out_codec_ctx_a->channels, 1);&#xA;&#xA;    uint8_t **converted_input_samples = NULL;&#xA;    if (!(converted_input_samples = (uint8_t**)calloc(out_codec_ctx_a->channels, sizeof(**converted_input_samples))))&#xA;        qDebug("Could not allocate converted input sample pointers");&#xA;&#xA;    int dec_got_frame_a, enc_got_frame_a; &#xA;&#xA;    auto *frame = allocate_frame_buffer(out_codec_ctx, img_width, img_height);&#xA;    auto *swsctx = initialize_sample_scaler(out_codec_ctx, img_width, img_height);&#xA;&#xA;    int64_t vid_pts = 0;&#xA;&#xA;    ofmt_ctx->streams[0]->r_frame_rate = out_codec_ctx->framerate;&#xA;    ofmt_ctx->streams[0]->codec->time_base = out_codec_ctx->time_base;&#xA;&#xA;    while(cam_on)&#xA;    {&#xA;        cv::Mat temp;   // image captured from opencv cam&#xA;        show_on = keep_sending;&#xA;&#xA;        //capture image from webcam&#xA;        if (cam.isOpened())&#xA;        {&#xA;            cam.read(temp);&#xA;            //convert BGR to RGB to show them&#xA;            cv::cvtColor(temp, temp, cv::COLOR_BGR2RGB);&#xA;        }&#xA;&#xA;        if(show_on)&#xA;        {&#xA;            //resize and save read img to global variable cap_img&#xA;            mutex.lock();&#xA;            temp.copyTo(cap_img);&#xA;            QImage qimg = QImage(cap_img.data, img_width, img_height, img_width * img_channels, QImage::Format_RGB888);&#xA;            mutex.unlock();&#xA;            emit ThreadCam_RTMP::setImage(qimg);&#xA;&#xA;        }&#xA;&#xA;        //rtmp streaming&#xA;        if (encode_video || encode_audio)&#xA;        {&#xA;            if (encode_video &amp;&amp; (!encode_audio || av_compare_ts(vid_next_pts, time_base_q, aud_next_pts, time_base_q) &lt;= 0))&#xA;            {&#xA;                mutex.lock();&#xA;                AVPacket pkt = {0};&#xA;                av_init_packet(&amp;pkt);&#xA;&#xA;                frame = av_frame_alloc();&#xA;&#xA;                std::vector 
framebuf(av_image_get_buffer_size(out_codec_ctx->pix_fmt, img_width, img_height, 1));&#xA;                av_image_fill_arrays(frame->data, frame->linesize, framebuf.data(), out_codec_ctx->pix_fmt,img_width, img_height, 1);&#xA;                frame->width = img_width;&#xA;                frame->height = img_height;&#xA;                frame->format = static_cast<int>(out_codec_ctx->pix_fmt);&#xA;&#xA;                const int stride[] = {static_cast<int>(temp.step[0])};&#xA;                sws_scale(swsctx, &amp;temp.data, stride, 0, temp.rows, frame->data, frame->linesize);&#xA;                frame->pts = vid_pts &#x2B; av_rescale_q(1, out_codec_ctx->time_base, out_stream->time_base);&#xA;                vid_pts = frame->pts;&#xA;                pkt.pts = frame->pts;&#xA;                qDebug() &lt;&lt; frame->pts;&#xA;&#xA;                int ret = avcodec_send_frame(out_codec_ctx, frame);&#xA;                if (ret &lt; 0)&#xA;                {&#xA;                  qDebug("Error sending frame to codec context!");&#xA;                  av_log_set_callback(printerror);&#xA;                }&#xA;&#xA;                ret = avcodec_receive_packet(out_codec_ctx, &amp;pkt);&#xA;                if (ret &lt; 0)&#xA;                {&#xA;                  qDebug("Error receiving packet from codec context!");&#xA;                }&#xA;&#xA;                av_interleaved_write_frame(ofmt_ctx, &amp;pkt);&#xA;                av_packet_unref(&amp;pkt);&#xA;&#xA;                qDebug() &lt;&lt; av_q2d(ofmt_ctx->streams[0]->codec->time_base);&#xA;&#xA;                mutex.unlock();&#xA;                framecnt&#x2B;&#x2B;;&#xA;                vid_next_pts = framecnt * calc_duration;&#xA;            }&#xA;            else&#xA;            {&#xA;                //audio trancoding here&#xA;                const int output_frame_size = out_codec_ctx_a->frame_size;&#xA;&#xA;                /**&#xA;                * Make sure that there is one frame worth of samples in the FIFO&#xA;                * buffer so that the encoder can do its work.&#xA;                * Since the decoder&#x27;s and the encoder&#x27;s frame size may differ, we&#xA;                * need to FIFO buffer to store as many frames worth of input samples&#xA;                * that they make up at least one frame worth of output samples.&#xA;                */&#xA;                while (av_audio_fifo_size(fifo) &lt; output_frame_size)&#xA;                {&#xA;                    /**&#xA;                    * Decode one frame worth of audio samples, convert it to the&#xA;                    * output sample format and put it into the FIFO buffer.&#xA;                    */&#xA;                    AVFrame *input_frame = av_frame_alloc();&#xA;                    if (!input_frame) ret = AVERROR(ENOMEM);&#xA;&#xA;                    /** Decode one frame worth of audio samples. */&#xA;                    /** Packet used for temporary storage. */&#xA;                    AVPacket input_packet;&#xA;                    av_init_packet(&amp;input_packet);&#xA;                    input_packet.data = NULL;&#xA;                    input_packet.size = 0;&#xA;&#xA;                    /** Read one audio frame from the input file into a temporary packet. */&#xA;                    if ((ret = av_read_frame(ifmt_ctx_a, &amp;input_packet)) &lt; 0)&#xA;                    {&#xA;                        /** If we are at the end of the file, flush the decoder below. 
*/&#xA;                        if (ret == AVERROR_EOF) encode_audio = 0;&#xA;                        else qDebug("Could not read audio frame");&#xA;                    }&#xA;&#xA;                    /**&#xA;                    * Decode the audio frame stored in the temporary packet.&#xA;                    * The input audio stream decoder is used to do this.&#xA;                    * If we are at the end of the file, pass an empty packet to the decoder&#xA;                    * to flush it.&#xA;                    */&#xA;                    if ((ret = avcodec_decode_audio4(ifmt_ctx_a->streams[audioindex]->codec, input_frame, &amp;dec_got_frame_a, &amp;input_packet)) &lt; 0)&#xA;                        qDebug("Could not decode audio frame");&#xA;&#xA;                    av_packet_unref(&amp;input_packet);&#xA;                    /** If there is decoded data, convert and store it */&#xA;                    if (dec_got_frame_a)&#xA;                    {&#xA;                        /**&#xA;                        * Allocate memory for the samples of all channels in one consecutive&#xA;                        * block for convenience.&#xA;                        */&#xA;                        if ((ret = av_samples_alloc(converted_input_samples, NULL, out_codec_ctx_a->channels,&#xA;                            input_frame->nb_samples, out_codec_ctx_a->sample_fmt, 0)) &lt; 0)&#xA;                        {&#xA;                            qDebug("Could not allocate converted input samples");&#xA;                            av_freep(&amp;(*converted_input_samples)[0]);&#xA;                            free(*converted_input_samples);&#xA;                        }&#xA;&#xA;                        /**&#xA;                        * Convert the input samples to the desired output sample format.&#xA;                        * This requires a temporary storage provided by converted_input_samples.&#xA;                        */&#xA;                        /** Convert the samples using the resampler. */&#xA;                        if ((ret = swr_convert(aud_convert_ctx, converted_input_samples, input_frame->nb_samples,&#xA;                            (const uint8_t**)input_frame->extended_data, input_frame->nb_samples)) &lt; 0) {&#xA;                            qDebug("Could not convert input samples"); qDebug() &lt;&lt; ret;&#xA;                        }&#xA;&#xA;                        /** Add the converted input samples to the FIFO buffer for later processing. */&#xA;                        /**&#xA;                        * Make the FIFO as large as it needs to be to hold both,&#xA;                        * the old and the new samples.&#xA;                        */&#xA;                        if ((ret = av_audio_fifo_realloc(fifo, av_audio_fifo_size(fifo) &#x2B; input_frame->nb_samples)) &lt; 0)&#xA;                            qDebug("Could not reallocate FIFO");&#xA;&#xA;                        /** Store the new samples in the FIFO buffer. 
*/&#xA;                        if (av_audio_fifo_write(fifo, (void **)converted_input_samples,&#xA;                            input_frame->nb_samples) &lt; input_frame->nb_samples)&#xA;                            qDebug("Could not write data to FIFO");&#xA;                    }&#xA;                }&#xA;&#xA;                /**&#xA;                * If we have enough samples for the encoder, we encode them.&#xA;                * At the end of the file, we pass the remaining samples to&#xA;                * the encoder.&#xA;                */&#xA;                if (av_audio_fifo_size(fifo) >= output_frame_size)&#xA;                    /**&#xA;                    * Take one frame worth of audio samples from the FIFO buffer,&#xA;                    * encode it and write it to the output file.&#xA;                    */&#xA;                {&#xA;                    /** Temporary storage of the output samples of the frame written to the file. */&#xA;                    AVFrame *output_frame = av_frame_alloc();&#xA;                    if (!output_frame) ret = AVERROR(ENOMEM);&#xA;                    /**&#xA;                    * Use the maximum number of possible samples per frame.&#xA;                    * If there is less than the maximum possible frame size in the FIFO&#xA;                    * buffer use this number. Otherwise, use the maximum possible frame size&#xA;                    */&#xA;                    const int frame_size = FFMIN(av_audio_fifo_size(fifo), out_codec_ctx_a->frame_size);&#xA;&#xA;                    /** Initialize temporary storage for one output frame. */&#xA;                    /**&#xA;                    * Set the frame&#x27;s parameters, especially its size and format.&#xA;                    * av_frame_get_buffer needs this to allocate memory for the&#xA;                    * audio samples of the frame.&#xA;                    * Default channel layouts based on the number of channels&#xA;                    * are assumed for simplicity.&#xA;                    */&#xA;                    output_frame->nb_samples = frame_size;&#xA;                    output_frame->channel_layout = out_codec_ctx_a->channel_layout;&#xA;                    output_frame->format = out_codec_ctx_a->sample_fmt;&#xA;                    output_frame->sample_rate = out_codec_ctx_a->sample_rate;&#xA;&#xA;                    /**&#xA;                    * Allocate the samples of the created frame. This call will make&#xA;                    * sure that the audio frame can hold as many samples as specified.&#xA;                    */&#xA;                    if ((ret = av_frame_get_buffer(output_frame, 0)) &lt; 0)&#xA;                    {&#xA;                        qDebug("Could not allocate output frame samples");&#xA;                        av_frame_free(&amp;output_frame);&#xA;                    }&#xA;&#xA;                    /**&#xA;                    * Read as many samples from the FIFO buffer as required to fill the frame.&#xA;                    * The samples are stored in the frame temporarily.&#xA;                    */&#xA;                    if (av_audio_fifo_read(fifo, (void **)output_frame->data, frame_size) &lt; frame_size)&#xA;                        qDebug("Could not read data from FIFO");&#xA;&#xA;                    /** Encode one frame worth of audio samples. */&#xA;                    /** Packet used for temporary storage. 
*/&#xA;                    AVPacket output_packet;&#xA;                    av_init_packet(&amp;output_packet);&#xA;                    output_packet.data = NULL;&#xA;                    output_packet.size = 0;&#xA;&#xA;                    /** Set a timestamp based on the sample rate for the container. */&#xA;                    if (output_frame) nb_samples &#x2B;= output_frame->nb_samples;&#xA;&#xA;                    /**&#xA;                    * Encode the audio frame and store it in the temporary packet.&#xA;                    * The output audio stream encoder is used to do this.&#xA;                    */&#xA;                    if ((ret = avcodec_encode_audio2(out_codec_ctx_a, &amp;output_packet, output_frame, &amp;enc_got_frame_a)) &lt; 0)&#xA;                    {&#xA;                        qDebug("Could not encode frame");&#xA;                        av_packet_unref(&amp;output_packet);&#xA;                    }&#xA;&#xA;                    /** Write one audio frame from the temporary packet to the output file. */&#xA;                    if (enc_got_frame_a)&#xA;                    {&#xA;                        output_packet.stream_index = 1;&#xA;&#xA;                        AVRational time_base = ofmt_ctx->streams[1]->time_base;&#xA;                        AVRational r_framerate1 = { ifmt_ctx_a->streams[audioindex]->codec->sample_rate, 1 };// { 44100, 1};&#xA;                        int64_t calc_duration = (double)(AV_TIME_BASE)*(1 / av_q2d(r_framerate1));&#xA;&#xA;                        output_packet.pts = av_rescale_q(nb_samples*calc_duration, time_base_q, time_base);&#xA;                        output_packet.dts = output_packet.pts;&#xA;                        output_packet.duration = output_frame->nb_samples;&#xA;&#xA;                        //qDebug("audio pts : %d\n", output_packet.pts);&#xA;                        aud_next_pts = nb_samples*calc_duration;&#xA;&#xA;                        int64_t pts_time = av_rescale_q(output_packet.pts, time_base, time_base_q);&#xA;                        int64_t now_time = av_gettime() - start_time;&#xA;                        if ((pts_time > now_time) &amp;&amp; ((aud_next_pts &#x2B; pts_time - now_time)/cleanup&#xA;    if (out_stream_a) avcodec_close(out_stream_a->codec);&#xA;    if (fifo) av_audio_fifo_free(fifo);&#xA;    avio_close(ofmt_ctx->pb);&#xA;    avformat_free_context(ifmt_ctx_a);&#xA;    avformat_free_context(ofmt_ctx);&#xA;}&#xA;</int></int></qpixmap></qdebug></qapplication></vector>

    #include "rtmpstream.h"&#xA;#include "global.h"&#xA;&#xA;extern "C" {&#xA;#include "libavformat/avformat.h"&#xA;#include "libavcodec/avcodec.h"&#xA;#include "libavutil/avutil.h"&#xA;#include "libswscale/swscale.h"&#xA;}&#xA;&#xA;#include <qmessagebox>&#xA;#include <qapplication>&#xA;#include <qdebug>&#xA;&#xA;void initialize_avformat_context(AVFormatContext *&amp;fctx, const char *format_name)&#xA;{&#xA;  int ret = avformat_alloc_output_context2(&amp;fctx, nullptr, format_name, nullptr);&#xA;  if (ret != 0)&#xA;  {&#xA;    qDebug("Could not allocate output format context!");&#xA;    QApplication::quit();&#xA;  }&#xA;}&#xA;&#xA;void initialize_io_context(AVFormatContext *&amp;fctx, const char *output)&#xA;{&#xA;  if (!(fctx->oformat->flags &amp; AVFMT_NOFILE))&#xA;  {&#xA;    int ret = avio_open2(&amp;fctx->pb, output, AVIO_FLAG_WRITE, nullptr, nullptr);&#xA;    if (ret &lt; 0)&#xA;    {&#xA;      qDebug("Could not open output IO context!");&#xA;      QApplication::quit();&#xA;    }&#xA;  }&#xA;}&#xA;&#xA;void set_codec_params(AVFormatContext *fctx, AVCodecContext *codec_ctx, double width, double height, int fps, int bitrate)&#xA;{&#xA;  const AVRational dst_fps = {fps, 1};&#xA;  codec_ctx->codec_tag = 0;&#xA;  codec_ctx->codec_id = AV_CODEC_ID_FLV1;&#xA;  codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;&#xA;  codec_ctx->width = width;&#xA;  codec_ctx->height = height;&#xA;  codec_ctx->gop_size = 12;&#xA;  codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;&#xA;  codec_ctx->framerate = dst_fps;&#xA;  codec_ctx->time_base = av_inv_q(dst_fps);&#xA;  codec_ctx->bit_rate = bitrate;&#xA;  codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;&#xA;}&#xA;&#xA;void initialize_codec_stream(AVStream *stream, AVCodecContext *codec_ctx, AVCodec *codec, std::string codec_profile)&#xA;{&#xA;  int ret = avcodec_parameters_from_context(stream->codecpar, codec_ctx);&#xA;  if (ret &lt; 0)&#xA;  {&#xA;    qDebug("Could not initialize stream codec parameters!");&#xA;    QApplication::quit();&#xA;  }&#xA;&#xA;  // open video encoder&#xA;  ret = avcodec_open2(codec_ctx, codec, 0);&#xA;  if (ret &lt; 0)&#xA;  {&#xA;    qDebug("Could not open video encoder!");&#xA;    QApplication::quit();&#xA;  }&#xA;}&#xA;&#xA;&#xA;SwsContext *initialize_sample_scaler(AVCodecContext *codec_ctx, double width, double height)&#xA;{&#xA;  SwsContext *swsctx = sws_getContext(width, height, AV_PIX_FMT_BGR24, width, height, codec_ctx->pix_fmt, SWS_BICUBIC, nullptr, nullptr, nullptr);&#xA;  if (!swsctx)&#xA;  {&#xA;    qDebug("Could not initialize sample scaler!");&#xA;    QApplication::quit();&#xA;  }&#xA;&#xA;  return swsctx;&#xA;}&#xA;&#xA;AVFrame *allocate_frame_buffer(AVCodecContext *codec_ctx, double width, double height)&#xA;{&#xA;  AVFrame *frame = av_frame_alloc();&#xA;&#xA;  std::vector framebuf(av_image_get_buffer_size(codec_ctx->pix_fmt, width, height, 1));&#xA;  av_image_fill_arrays(frame->data, frame->linesize, framebuf.data(), codec_ctx->pix_fmt, width, height, 1);&#xA;  frame->width = width;&#xA;  frame->height = height;&#xA;  frame->format = static_cast<int>(codec_ctx->pix_fmt);&#xA;&#xA;  return frame;&#xA;}&#xA;&#xA;void write_frame(AVCodecContext *codec_ctx, AVFormatContext *fmt_ctx, AVStream *st, AVFrame *frame)&#xA;{&#xA;  AVPacket pkt = {0};&#xA;  av_init_packet(&amp;pkt);&#xA;&#xA;  int ret = avcodec_send_frame(codec_ctx, frame);&#xA;  if (ret &lt; 0)&#xA;  {&#xA;    qDebug("Error sending frame to codec context!");&#xA;    QApplication::quit();&#xA;  }&#xA;&#xA;  ret = avcodec_receive_packet(codec_ctx, &amp;pkt);&#xA;  if (ret 
&lt; 0)&#xA;  {&#xA;    qDebug("Error receiving packet from codec context!");&#xA;    QApplication::quit();&#xA;  }&#xA;&#xA;  /* rescale output packet timestamp values from codec to stream timebase */&#xA;  av_packet_rescale_ts(&amp;pkt, codec_ctx->time_base, st->time_base);&#xA;  pkt.stream_index = st->index;&#xA;&#xA;  av_interleaved_write_frame(fmt_ctx, &amp;pkt);&#xA;  av_packet_unref(&amp;pkt);&#xA;}&#xA;&#xA;int flush_encoder_a(AVFormatContext *ifmt_ctx_a, AVFormatContext *ofmt_ctx, unsigned int stream_index, int nb_samples)&#xA;{&#xA;    int ret;&#xA;    int got_frame;&#xA;    AVPacket enc_pkt;&#xA;    if (!(ofmt_ctx->streams[stream_index]->codec->codec->capabilities &amp; AV_CODEC_CAP_DELAY)) return 0;&#xA;    while (1)&#xA;    {&#xA;        enc_pkt.data = NULL;&#xA;        enc_pkt.size = 0;&#xA;        av_init_packet(&amp;enc_pkt);&#xA;        ret = avcodec_encode_audio2(ofmt_ctx->streams[stream_index]->codec, &amp;enc_pkt, NULL, &amp;got_frame);&#xA;        av_frame_free(NULL);&#xA;&#xA;        if (ret &lt; 0) break;&#xA;        if (!got_frame)&#xA;        {&#xA;            ret = 0;&#xA;            break;&#xA;        }&#xA;&#xA;        qDebug("Flush Encoder: Succeed to encode 1 frame!\tsize:%5d\n", enc_pkt.size);&#xA;        nb_samples&#x2B;=1024;&#xA;&#xA;        //Write PTS&#xA;        AVRational time_base = ofmt_ctx->streams[stream_index]->time_base;//{ 1, 1000 };&#xA;        AVRational r_framerate1 = { ifmt_ctx_a->streams[0]->codec->sample_rate, 1 };&#xA;        AVRational time_base_q = { 1, AV_TIME_BASE };&#xA;&#xA;        //Duration between 2 frames (us)&#xA;        int64_t calc_duration = (double)(AV_TIME_BASE)*(1 / av_q2d(r_framerate1));&#xA;&#xA;        //Parameters&#xA;        enc_pkt.pts = av_rescale_q(nb_samples*calc_duration, time_base_q, time_base);&#xA;        enc_pkt.dts = enc_pkt.pts;&#xA;        enc_pkt.duration = 1024;&#xA;&#xA;        /* copy packet*/&#xA;        //Convert PTS/DTS&#xA;        enc_pkt.pos = -1;&#xA;&#xA;        //ofmt_ctx->duration = enc_pkt.duration * nb_samples;&#xA;&#xA;        /* mux encoded frame */&#xA;        ret = av_interleaved_write_frame(ofmt_ctx, &amp;enc_pkt);&#xA;        if (ret &lt; 0) break;&#xA;    }&#xA;    return ret;&#xA;}&#xA;</int></qdebug></qapplication></qmessagebox>

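    A hedged editorial note on the question above: FLV players and servers generally derive the frame rate from packet timestamps rather than from a header field, so a stream whose timestamps do not advance consistently is commonly reported as 0 fps. In the run() loop above, frame->pts is computed against out_stream->time_base and the packet is muxed without any rescaling, whereas the write_frame() helper in rtmpstream.cpp rescales from the codec time base to the stream time base but is never called. A minimal sketch of the usual pattern, reusing the question's variable names (frame_index is a new, hypothetical counter):

    AVPacket pkt = {0};
    av_init_packet(&pkt);

    frame->pts = frame_index++;            // one tick per frame in the codec time base (1/fps)

    int ret = avcodec_send_frame(out_codec_ctx, frame);
    while (ret >= 0) {
        ret = avcodec_receive_packet(out_codec_ctx, &pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            break;                         // encoder needs more input or is fully drained
        // map the codec time base (1/fps) onto the muxer's stream time base
        av_packet_rescale_ts(&pkt, out_codec_ctx->time_base, out_stream->time_base);
        pkt.stream_index = out_stream->index;
        av_interleaved_write_frame(ofmt_ctx, &pkt);
        av_packet_unref(&pkt);
    }

    With the encoder's time_base set to 1/fps (as set_codec_params does), timestamps then advance uniformly and the server can derive a non-zero frame rate.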

  • Merge commit '12004a9a7f20e44f4da2ee6c372d5e1794c8d6c5'

    20 March 2017, by Clément Bœsch
    Merge commit '12004a9a7f20e44f4da2ee6c372d5e1794c8d6c5'
    

    * commit '12004a9a7f20e44f4da2ee6c372d5e1794c8d6c5':
    audiodsp/x86: yasmify vector_clipf_sse
    audiodsp: reorder arguments for vector_clipf

    Merged the version from Libav after a discussion with James Almer on
    IRC:

    19:22 <ubitux> jamrial: opinion on 12004a9a7f20e44f4da2ee6c372d5e1794c8d6c5?
    19:23 <ubitux> it was apparently yasmified differently
    19:23 <ubitux> (it depends on the previous commit arg shuffle)
    19:24 <ubitux> i don't see the magic movsxdifnidn in your port btw
    19:24 <ubitux> it's a port from 1d36defe94c7d7ebf995d4dbb4f878d06272f9c6
    19:25 <jamrial> seems better thanks to said arg shuffle
    19:25 <jamrial> the loop is the same, but init is simpler
    19:25 <jamrial> probably worth merging
    19:25 <ubitux> OK
    19:25 <ubitux> thanks
    19:26 <jamrial> curious they didn't make len ptrdiff_t after the previous bunch of commits, heh
    19:26 <ubitux> yeah indeed

    Both commits are merged at the same time to prevent a conflict with our
    existing yasmified ff_vector_clipf_sse.

    Merged-by: Clément Bœsch <u@pkh.me>

    • [DH] libavcodec/ac3enc_float.c
    • [DH] libavcodec/arm/audiodsp_init_neon.c
    • [DH] libavcodec/arm/audiodsp_neon.S
    • [DH] libavcodec/audiodsp.c
    • [DH] libavcodec/audiodsp.h
    • [DH] libavcodec/cook.c
    • [DH] libavcodec/x86/audiodsp.asm
    • [DH] libavcodec/x86/audiodsp_init.c
    • [DH] tests/checkasm/audiodsp.c
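
    For context, vector_clipf clamps each float in a buffer to a range; the yasmified SSE version optimizes this inner loop. A scalar sketch of the semantics (an illustration, not the FFmpeg source; the argument order follows the reorder commit above):

    #include <algorithm>
    #include <cstdio>

    // Clamp len floats from src into dst; mirrors the reordered prototype
    // (dst, src, len, min, max). Sketch only.
    static void vector_clipf_c(float *dst, const float *src, int len,
                               float min, float max)
    {
        for (int i = 0; i < len; i++)
            dst[i] = std::clamp(src[i], min, max);
    }

    int main()
    {
        const float in[4] = {-2.0f, 0.5f, 3.0f, 1.0f};
        float out[4];
        vector_clipf_c(out, in, 4, -1.0f, 1.0f);
        for (float v : out)
            std::printf("%g ", v);   // prints: -1 0.5 1 1
        std::printf("\n");
        return 0;
    }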