Advanced search

Media (16)


Other articles (32)

  • Publishing on MediaSPIP

13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP site to find out.

  • Encoding and processing into web-friendly formats

13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
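
The conversion matrix described above (MP4, Ogv and WebM for video; MP3 and Ogg for audio) can be sketched as a small command builder. This is an illustrative sketch only, not MediaSPIP's actual pipeline; the codec flags are the common encoders for each container, not MediaSPIP's exact settings.

```python
# Illustrative sketch only: builds one ffmpeg command per web-friendly
# target container. The codec choices are conventional defaults.
def web_transcode_commands(source, basename):
    """Build one ffmpeg command line per HTML5-friendly video format."""
    video_targets = {
        "mp4":  ["-c:v", "libx264", "-c:a", "aac"],
        "webm": ["-c:v", "libvpx", "-c:a", "libvorbis"],
        "ogv":  ["-c:v", "libtheora", "-c:a", "libvorbis"],
    }
    return [["ffmpeg", "-i", source, *flags, f"{basename}.{ext}"]
            for ext, flags in video_targets.items()]

commands = web_transcode_commands("upload.mov", "converted")
```

Each returned list can be handed to a process runner; the original upload is kept alongside the converted copies, as the article notes.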

  • Contribute to a better visual interface

13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (7618)

  • Revision 7621c779e5 : Add palette coding mode for UV

22 February 2015, by hui su

    Changed Paths :
     Modify /vp9/common/vp9_blockd.h
     Modify /vp9/common/vp9_entropymode.c
     Modify /vp9/common/vp9_entropymode.h
     Modify /vp9/common/vp9_onyxc_int.h
     Modify /vp9/common/vp9_palette.c
     Modify /vp9/common/vp9_reconintra.c
     Modify /vp9/decoder/vp9_decodemv.c
     Modify /vp9/encoder/vp9_bitstream.c
     Modify /vp9/encoder/vp9_context_tree.c
     Modify /vp9/encoder/vp9_context_tree.h
     Modify /vp9/encoder/vp9_encodeframe.c
     Modify /vp9/encoder/vp9_rdopt.c

    Add palette coding mode for UV

    For 444 videos, a single palette of 3-d colors is
    generated for YUV. For 420 videos, there may be two
    palettes, one for Y, and the other for UV.

    Also fixed a bug when palette and tx-skip are both on.

    on derflr
    --enable-palette +0.00%
    with all experiments +5.87% (was +5.93%)

    on screen_content
    --enable-palette +6.00%
    --enable-palette --enable-tx_skip +15.3%

    on screen_content 444 version
    --enable-palette +6.76%
    --enable-palette --enable-tx_skip +19.5%

    Change-Id : I7287090aecc90eebcd4335d132a8c2c3895dfdd4
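
The split the commit describes — one joint palette of 3-D (Y, U, V) colors for 4:4:4 input versus separate Y and UV palettes for 4:2:0 — boils down to mapping each pixel to its nearest palette entry. A rough Python sketch of that idea follows (illustrative only; the actual vp9 code is C and considerably more involved):

```python
# Sketch of palette-mode color mapping, not the vp9 implementation.
def nearest_entry(color, palette):
    """Index of the palette entry with minimum squared Euclidean distance."""
    return min(range(len(palette)),
               key=lambda i: sum((c - p) ** 2 for c, p in zip(color, palette[i])))

# 4:4:4: a single palette of 3-D (Y, U, V) colors, as the commit says.
palette_444 = [(16, 128, 128), (235, 128, 128)]

# 4:2:0: there may be two palettes, one 1-D for Y and one 2-D for (U, V).
palette_y = [(16,), (235,)]
palette_uv = [(128, 128), (90, 240)]
```

Using one joint 3-D palette for 4:4:4 works because every pixel carries a full (Y, U, V) triple there, whereas in 4:2:0 the chroma planes are subsampled and indexed separately.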

  • FFMPEG libav gdigrab capturing with wrong colors

7 March 2018, by user1496491

I’m capturing the screen with the code below, and the resulting picture has wrong colors.

    Screenshot

The picture on the left is the raw data, which I assumed to be ARGB; the picture on the right is encoded as YUV. I’ve tried different formats and the pictures change slightly, but it never looks how it should. In what format does gdigrab give its output? What’s the right way to encode it?
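
For what it’s worth, gdigrab on Windows typically delivers packed BGRA (AV_PIX_FMT_BGRA), not ARGB, so interpreting the buffer as ARGB/ABGR swaps channels exactly as in the screenshot. A minimal Python sketch of the byte reordering (illustrative only, not part of the question’s code):

```python
# Sketch: reinterpret packed BGRA pixel data as RGBA by swapping bytes.
def bgra_to_rgba(data: bytes) -> bytes:
    """Reorder packed BGRA pixels into RGBA."""
    out = bytearray(len(data))
    for i in range(0, len(data), 4):
        b, g, r, a = data[i:i + 4]
        out[i:i + 4] = (r, g, b, a)
    return bytes(out)

# A single opaque red pixel as gdigrab would deliver it: B=0, G=0, R=255, A=255.
red_bgra = bytes((0, 0, 255, 255))
```

In the C++ code below, the equivalent fix would be passing AV_PIX_FMT_BGRA (rather than AV_PIX_FMT_ABGR) as the source format to sws_getContext, letting libswscale do this reordering itself.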

    #include "MainWindow.h"

    #include <QGuiApplication>
    #include <QLabel>
    #include <QScreen>
    #include <QTimer>
    #include <QLayout>
    #include <QImage>
    #include <QtConcurrent>
    #include <QThreadPool>

    #include "ScreenCapture.h"

    MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent)
    {
       resize(800, 600);

       label = new QLabel();
       label->setAlignment(Qt::AlignHCenter | Qt::AlignVCenter);

       auto layout = new QHBoxLayout();
       layout->addWidget(label);

       auto widget = new QWidget();
       widget->setLayout(layout);
       setCentralWidget(widget);

       init();
       initOutFile();
       collectFrame();
    }

    MainWindow::~MainWindow()
    {
       avformat_close_input(&inputFormatContext);
       avformat_free_context(inputFormatContext);

       QThreadPool::globalInstance()->waitForDone();
    }

    void MainWindow::init()
    {

       av_register_all();
       avcodec_register_all();
       avdevice_register_all();

       auto screen = QGuiApplication::screens()[1];
       QRect geometry = screen->geometry();

       inputFormatContext = avformat_alloc_context();

       AVDictionary* options = NULL;
       av_dict_set(&amp;options, "framerate", "30", NULL);
       av_dict_set(&amp;options, "offset_x", QString::number(geometry.x()).toLatin1().data(), NULL);
       av_dict_set(&amp;options, "offset_y", QString::number(geometry.y()).toLatin1().data(), NULL);
       av_dict_set(&amp;options, "preset", "ultrafast", NULL);
       av_dict_set(&amp;options, "probesize", "10MB", NULL);
       av_dict_set(&amp;options, "pix_fmt", "yuv420p", NULL);
       av_dict_set(&amp;options, "video_size", QString(QString::number(geometry.width()) + "x" + QString::number(geometry.height())).toLatin1().data(), NULL);

       AVInputFormat* inputFormat = av_find_input_format("gdigrab");
       avformat_open_input(&inputFormatContext, "desktop", inputFormat, &options);

    //    AVDictionary* options = NULL;
    //    av_dict_set(&options, "framerate", "30", NULL);
    //    av_dict_set(&options, "preset", "ultrafast", NULL);
    //    av_dict_set(&options, "vcodec", "h264", NULL);
    //    av_dict_set(&options, "s", "1280x720", NULL);
    //    av_dict_set(&options, "crf", "0", NULL);
    //    av_dict_set(&options, "rtbufsize", "100M", NULL);

    //    AVInputFormat *format = av_find_input_format("dshow");
    //    avformat_open_input(&inputFormatContext, "video=screen-capture-recorder", format, &options);

       av_dict_free(&options);
       avformat_find_stream_info(inputFormatContext, NULL);

       videoStreamIndex = av_find_best_stream(inputFormatContext, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);

       inputCodec = avcodec_find_decoder(inputFormatContext->streams[videoStreamIndex]->codecpar->codec_id);
       if(!inputCodec) qDebug() << "Input stream codec not found!";

       inputCodecContext = avcodec_alloc_context3(inputCodec);
       inputCodecContext->codec_id = inputCodec->id;

       avcodec_parameters_to_context(inputCodecContext, inputFormatContext->streams[videoStreamIndex]->codecpar);

       if(avcodec_open2(inputCodecContext, inputCodec, NULL)) qDebug() << "Failed to open input codec!";
    }

    void MainWindow::initOutFile()
    {
       const char* filename = "C:/Temp/output.mp4";

       if(avformat_alloc_output_context2(&outFormatContext, NULL, NULL, filename) < 0) qDebug() << "Failed to create output context!";

       outCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
       if(!outCodec) qDebug() << "Failed to find encoder!";

       videoStream = avformat_new_stream(outFormatContext, outCodec);
       videoStream->time_base = {1, 30};

       const AVPixelFormat* pixelFormat = outCodec->pix_fmts;
       while (*pixelFormat != AV_PIX_FMT_NONE)
       {
           qDebug() &lt;&lt; "OUT_FORMAT" &lt;&lt; av_get_pix_fmt_name(*pixelFormat);
           ++pixelFormat;
       }

       outCodecContext = videoStream->codec;
       outCodecContext->bit_rate = 400000;
       outCodecContext->width = inputCodecContext->width;
       outCodecContext->height = inputCodecContext->height;
       outCodecContext->time_base = videoStream->time_base;
       outCodecContext->gop_size = 10;
       outCodecContext->max_b_frames = 1;
       outCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;

       if (outFormatContext->oformat->flags & AVFMT_GLOBALHEADER) outCodecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;

       if(avcodec_open2(outCodecContext, outCodec, NULL)) qDebug() << "Failed to open output codec!";

       swsContext = sws_getContext(inputCodecContext->width,
                                   inputCodecContext->height,
    //                                inputCodecContext->pix_fmt,
                                   AV_PIX_FMT_ABGR,
                                   outCodecContext->width,
                                   outCodecContext->height,
                                   outCodecContext->pix_fmt,
                                   SWS_BICUBIC, NULL, NULL, NULL);

       if(avio_open(&outFormatContext->pb, filename, AVIO_FLAG_WRITE) < 0) qDebug() << "Failed to open output file!";
       if(avformat_write_header(outFormatContext, NULL) < 0) qDebug() << "Failed to write header!";
    }

    void MainWindow::collectFrame()
    {
       AVFrame* inFrame = av_frame_alloc();
       inFrame->format = inputCodecContext->pix_fmt;
       inFrame->width = inputCodecContext->width;
       inFrame->height = inputCodecContext->height;

       int size = av_image_alloc(inFrame->data, inFrame->linesize, inFrame->width, inFrame->height, inputCodecContext->pix_fmt, 1);
       qDebug() << size;

       AVFrame* outFrame = av_frame_alloc();
       outFrame->format = outCodecContext->pix_fmt;
       outFrame->width = outCodecContext->width;
       outFrame->height = outCodecContext->height;

       qDebug() &lt;&lt; av_image_alloc(outFrame->data, outFrame->linesize, outFrame->width, outFrame->height, outCodecContext->pix_fmt, 1);

       AVPacket packet;
       av_init_packet(&packet);

       av_read_frame(inputFormatContext, &packet);
    //    while(av_read_frame(inputFormatContext, &packet) >= 0)
    //    {
           if(packet.stream_index == videoStream->index)
           {

               memcpy(inFrame->data[0], packet.data, size);

               sws_scale(swsContext, inFrame->data, inFrame->linesize, 0, inputCodecContext->height, outFrame->data, outFrame->linesize);

               QImage image(inFrame->data[0], inFrame->width, inFrame->height, QImage::Format_ARGB32);
               label->setPixmap(QPixmap::fromImage(image).scaled(label->size(), Qt::KeepAspectRatio));

           AVPacket outPacket;
           av_init_packet(&outPacket);

           int encodeResult = avcodec_receive_packet(outCodecContext, &outPacket);
           while(encodeResult == AVERROR(EAGAIN))
           {
               if(avcodec_send_frame(outCodecContext, outFrame)) qDebug() << "Error sending frame for encoding!";

               encodeResult = avcodec_receive_packet(outCodecContext, &outPacket);
           }
           if(encodeResult != 0) qDebug() << "Error during encoding!" << encodeResult;

           if(outPacket.pts != AV_NOPTS_VALUE) outPacket.pts = av_rescale_q(outPacket.pts, videoStream->codec->time_base, videoStream->time_base);
           if(outPacket.dts != AV_NOPTS_VALUE) outPacket.dts = av_rescale_q(outPacket.dts, videoStream->codec->time_base, videoStream->time_base);

           av_write_frame(outFormatContext, &outPacket);

           av_packet_unref(&outPacket);
       }
    //    }

       av_packet_unref(&packet);

       av_write_trailer(outFormatContext);
       avio_close(outFormatContext->pb);
    }
  • Using hex colors with ffmpeg's showwaves

25 August 2017, by Dan

I’ve been trying to create a video with ffmpeg’s showwaves filter and have cobbled together the command below, which I mostly understand. I’m wondering if it is possible to set the color of the waveform using hex colors (i.e. #F3ECDA instead of "blue").
    Also, feel free to tell me if there’s any unneeded garbage in the command as is. Thanks.

    ffmpeg -i audio.mp3 -loop 1 -i picture.jpg -filter_complex \
    "[0:a]showwaves=s=960x202:mode=cline:colors=blue[fg]; \
    [1:v]scale=960:-1,crop=iw:540[bg]; \
    [bg][fg]overlay=shortest=1:main_h-overlay_h-30,format=yuv420p[out]" \
    -map "[out]" -map 0:a -c:v libx264 -preset fast -crf 18 -c:a libopus output.col.mkv
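
Regarding the hex question: ffmpeg’s color parser accepts hexadecimal values in the 0xRRGGBB form (and generally #RRGGBB as well), so colors=0xF3ECDA should work; the 0x form also sidesteps shell and filtergraph quoting issues with the # character. A small Python sketch assembling that part of the filtergraph (illustrative only):

```python
# Sketch: build the showwaves filter spec with a hex color instead of a name.
def showwaves_filter(size="960x202", mode="cline", color="0xF3ECDA"):
    """Assemble the showwaves portion of an ffmpeg filter_complex string."""
    return f"[0:a]showwaves=s={size}:mode={mode}:colors={color}[fg]"
```

The resulting string replaces the `[0:a]showwaves=...[fg]` segment of the original command unchanged otherwise.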