
Other articles (79)

  • User profiles

    12 April 2011

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (administration) section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language; once one has, it is greyed out in the configuration and (...)

  • XMP PHP

    13 May 2011

    According to Wikipedia, XMP stands for:
    Extensible Metadata Platform (XMP) is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)

On other sites (6106)

  • Libav (ffmpeg) copying decoded video timestamps to encoder

    31 October 2016, by Jason C

    I am writing an application that decodes a single video stream from an input file (any codec, any container), does a bunch of image processing, and encodes the results to an output file (single video stream, Quicktime RLE, MOV). I am using ffmpeg’s libav 3.1.5 (Windows build for now, but the application will be cross-platform).

    There is a 1:1 correspondence between input and output frames and I want the frame timing in the output to be identical to the input. I am having a really, really hard time accomplishing this. So my general question is: how do I reliably (as in, for all inputs) set the output frame timing identical to the input?

    It took me a very long time to slog through the API and get to the point I am at now. I put together a minimal test program to work with:

    #include <cstdio>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/avutil.h>
    #include <libavutil/imgutils.h>
    #include <libswscale/swscale.h>
    }

    using namespace std;


    struct DecoderStuff {
       AVFormatContext *formatx;
       int nstream;
       AVCodec *codec;
       AVStream *stream;
       AVCodecContext *codecx;
       AVFrame *rawframe;
       AVFrame *rgbframe;
       SwsContext *swsx;
    };


    struct EncoderStuff {
       AVFormatContext *formatx;
       AVCodec *codec;
       AVStream *stream;
       AVCodecContext *codecx;
    };


    template <typename T>
    static void dump_timebase (const char *what, const T *o) {
       if (o)
           printf("%s timebase: %d/%d\n", what, o->time_base.num, o->time_base.den);
       else
           printf("%s timebase: null object\n", what);
    }


    // reads next frame into d.rawframe and d.rgbframe. returns false on error/eof.
    static bool read_frame (DecoderStuff &d) {

       AVPacket packet;
       int err = 0, haveframe = 0;

       // read
       while (!haveframe && err >= 0 && ((err = av_read_frame(d.formatx, &packet)) >= 0)) {
          if (packet.stream_index == d.nstream) {
              err = avcodec_decode_video2(d.codecx, d.rawframe, &haveframe, &packet);
          }
          av_packet_unref(&packet);
       }

       // error output
       if (!haveframe && err != AVERROR_EOF) {
           char buf[500];
           av_strerror(err, buf, sizeof(buf) - 1);
           buf[499] = 0;
           printf("read_frame: %s\n", buf);
       }

       // convert to rgb
       if (haveframe) {
           sws_scale(d.swsx, d.rawframe->data, d.rawframe->linesize, 0, d.rawframe->height,
                     d.rgbframe->data, d.rgbframe->linesize);
       }

       return haveframe;

    }


    // writes an output frame, returns false on error.
    static bool write_frame (EncoderStuff &e, AVFrame *inframe) {

       // see note in so post about outframe here
       AVFrame *outframe = av_frame_alloc();
       outframe->format = inframe->format;
       outframe->width = inframe->width;
       outframe->height = inframe->height;
       av_image_alloc(outframe->data, outframe->linesize, outframe->width, outframe->height,
                      AV_PIX_FMT_RGB24, 1);
       //av_frame_copy(outframe, inframe);
       static int count = 0;
       for (int n = 0; n < outframe->width * outframe->height; ++ n) {
           outframe->data[0][n*3+0] = ((n+count) % 100) ? 0 : 255;
           outframe->data[0][n*3+1] = ((n+count) % 100) ? 0 : 255;
           outframe->data[0][n*3+2] = ((n+count) % 100) ? 0 : 255;
       }
       ++ count;

       AVPacket packet;
       av_init_packet(&packet);
       packet.size = 0;
       packet.data = NULL;

       int err, havepacket = 0;
       if ((err = avcodec_encode_video2(e.codecx, &packet, outframe, &havepacket)) >= 0 && havepacket) {
           packet.stream_index = e.stream->index;
           err = av_interleaved_write_frame(e.formatx, &packet);
       }

       if (err < 0) {
           char buf[500];
           av_strerror(err, buf, sizeof(buf) - 1);
           buf[499] = 0;
           printf("write_frame: %s\n", buf);
       }

       av_packet_unref(&packet);
       av_freep(&outframe->data[0]);
       av_frame_free(&outframe);

       return err >= 0;

    }


    int main (int argc, char *argv[]) {

       const char *infile = "wildlife.wmv";
       const char *outfile = "test.mov";
       DecoderStuff d = {};
       EncoderStuff e = {};

       av_register_all();

       // decoder
       avformat_open_input(&d.formatx, infile, NULL, NULL);
       avformat_find_stream_info(d.formatx, NULL);
       d.nstream = av_find_best_stream(d.formatx, AVMEDIA_TYPE_VIDEO, -1, -1, &d.codec, 0);
       d.stream = d.formatx->streams[d.nstream];
       d.codecx = avcodec_alloc_context3(d.codec);
       avcodec_parameters_to_context(d.codecx, d.stream->codecpar);
       avcodec_open2(d.codecx, NULL, NULL);
       d.rawframe = av_frame_alloc();
       d.rgbframe = av_frame_alloc();
       d.rgbframe->format = AV_PIX_FMT_RGB24;
       d.rgbframe->width = d.codecx->width;
       d.rgbframe->height = d.codecx->height;
       av_frame_get_buffer(d.rgbframe, 1);
       d.swsx = sws_getContext(d.codecx->width, d.codecx->height, d.codecx->pix_fmt,
                               d.codecx->width, d.codecx->height, AV_PIX_FMT_RGB24,
                               SWS_POINT, NULL, NULL, NULL);
       //av_dump_format(d.formatx, 0, infile, 0);
       dump_timebase("in stream", d.stream);
       dump_timebase("in stream:codec", d.stream->codec); // note: deprecated
       dump_timebase("in codec", d.codecx);

       // encoder
       avformat_alloc_output_context2(&e.formatx, NULL, NULL, outfile);
       e.codec = avcodec_find_encoder(AV_CODEC_ID_QTRLE);
       e.stream = avformat_new_stream(e.formatx, e.codec);
       e.codecx = avcodec_alloc_context3(e.codec);
       e.codecx->bit_rate = 4000000; // arbitrary for qtrle
       e.codecx->width = d.codecx->width;
       e.codecx->height = d.codecx->height;
       e.codecx->gop_size = 30; // 99% sure this is arbitrary for qtrle
       e.codecx->pix_fmt = AV_PIX_FMT_RGB24;
       e.codecx->time_base = d.stream->time_base; // ???
       e.codecx->flags |= (e.formatx->flags & AVFMT_GLOBALHEADER) ? AV_CODEC_FLAG_GLOBAL_HEADER : 0;
       avcodec_open2(e.codecx, NULL, NULL);
       avcodec_parameters_from_context(e.stream->codecpar, e.codecx);
       //av_dump_format(e.formatx, 0, outfile, 1);
       dump_timebase("out stream", e.stream);
       dump_timebase("out stream:codec", e.stream->codec); // note: deprecated
       dump_timebase("out codec", e.codecx);

       // open file and write header
       avio_open(&e.formatx->pb, outfile, AVIO_FLAG_WRITE);
       avformat_write_header(e.formatx, NULL);

       // frames
       while (read_frame(d) && write_frame(e, d.rgbframe))
           ;

       // write trailer and close file
       av_write_trailer(e.formatx);
       avio_closep(&amp;e.formatx->pb);

    }

    A few notes about that:

    • Since all of my attempts at frame timing so far have failed, I’ve removed almost all timing-related stuff from this code to start with a clean slate.
    • Almost all error checking and cleanup omitted for brevity.
    • The reason I allocate a new output frame with a new buffer in write_frame, rather than using inframe directly, is because this is more representative of what my real application is doing. My real app also uses RGB24 internally, hence the conversions here.
    • The reason I generate a weird pattern in outframe, rather than using e.g. av_frame_copy, is that I just wanted a test pattern that compresses well with Quicktime RLE (my test input ends up generating a 1.7 GB output file otherwise).
    • The input video I am using, "wildlife.wmv", can be found here. I’ve hard-coded the filenames.
    • I am aware that avcodec_decode_video2 and avcodec_encode_video2 are deprecated, but I don't care. They work fine, I've already struggled too much getting my head around the latest version of the API, FFmpeg changes its API with nearly every release, and I really don't feel like dealing with avcodec_send_* and avcodec_receive_* right now.
    • I think I'm supposed to finish off by passing a NULL frame to avcodec_encode_video2 to flush some buffers or something, but I'm a bit confused about that. Unless somebody feels like explaining it, let's ignore it for now; it's a separate question (a sketch of a typical flush loop follows these notes). The docs are as vague about this point as they are about everything else.
    • My test input file’s frame rate is 29.97.
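
    For what it is worth, a typical flush loop with this older API looks roughly like the sketch below. This is my hedged addition, not part of the original program; it reuses the e variable and write-out logic from the test program above, omits error handling, and only matters for encoders that buffer frames (qtrle may simply produce nothing here).

       // drain the encoder: keep feeding NULL frames until no packet comes back
       int havepacket = 0;
       do {
           AVPacket packet;
           av_init_packet(&packet);
           packet.data = NULL;
           packet.size = 0;
           if (avcodec_encode_video2(e.codecx, &packet, NULL, &havepacket) < 0)
               break;
           if (havepacket) {
               packet.stream_index = e.stream->index;
               av_interleaved_write_frame(e.formatx, &packet);
           }
           av_packet_unref(&packet);
       } while (havepacket);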

    Now, as for my current attempts. The following timing-related fields are present in the above code, with details/confusion noted. There are a lot of them, because the API is mind-bogglingly convoluted (a small time-base rescaling example follows the list):

    • main: d.stream->time_base: Input video stream time base. For my test input file this is 1/1000.
    • main: d.stream->codec->time_base: Not sure what this is (I never could make sense of why AVStream has an AVCodecContext field when you always use your own new context anyway), and the codec field is deprecated. For my test input file this is 1/1000.
    • main: d.codecx->time_base: Input codec context time base. For my test input file this is 0/1. Am I supposed to set it?
    • main: e.stream->time_base: Time base of the output stream I create. What do I set this to?
    • main: e.stream->codec->time_base: Time base of the deprecated and mysterious codec field of the output stream I create. Do I set this to anything?
    • main: e.codecx->time_base: Time base of the encoder context I create. What do I set this to?
    • read_frame: packet.dts: Decoding timestamp of the packet read.
    • read_frame: packet.pts: Presentation timestamp of the packet read.
    • read_frame: packet.duration: Duration of the packet read.
    • read_frame: d.rawframe->pts: Presentation timestamp of the raw decoded frame. This is always 0. Why isn't it read by the decoder...?
    • read_frame: d.rgbframe->pts / write_frame: inframe->pts: Presentation timestamp of the decoded frame converted to RGB. Not set to anything currently.
    • read_frame: d.rawframe->pkt_*: Fields copied from the packet, discovered after reading this post. They are set correctly but I don't know if they are useful.
    • write_frame: outframe->pts: Presentation timestamp of the frame being encoded. Should I set this to something?
    • write_frame: outframe->pkt_*: Timing fields from a packet. Should I set these? They seem to be ignored by the encoder.
    • write_frame: packet.dts: Decoding timestamp of the packet being encoded. What do I set it to?
    • write_frame: packet.pts: Presentation timestamp of the packet being encoded. What do I set it to?
    • write_frame: packet.duration: Duration of the packet being encoded. What do I set it to?
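
    Since most of these fields only mean anything relative to a time base, here is a tiny rescaling example (my addition, with made-up numbers, not part of the original code) showing what av_rescale_q does:

       // 3003 ticks in a 1/90000 time base is 3003/90000 s, about 0.0334 s;
       // expressed in a 1/1000 time base that is ~33 ticks (33 ms)
       AVRational in_tb  = {1, 90000};
       AVRational out_tb = {1, 1000};
       int64_t pts_out = av_rescale_q(3003, in_tb, out_tb);  // -> 33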

    I have tried the following, with the described results. Note that inframe is d.rgbframe:

    1.  
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.codecx->time_base
      • Set d.rgbframe->pts = packet.dts in read_frame
      • Set outframe->pts = inframe->pts in write_frame
      • Result: Warning that the encoder time base is not set (since d.codecx->time_base was 0/1), then a segfault.
    2.  
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.stream->time_base
      • Set d.rgbframe->pts = packet.dts in read_frame
      • Set outframe->pts = inframe->pts in write_frame
      • Result: No warnings, but VLC reports the frame rate as 480.048 (no idea where this number came from) and the file plays too fast. Also, the encoder sets all the timing fields in packet to 0, which was not what I expected. (Edit: it turns out this is because av_interleaved_write_frame, unlike av_write_frame, takes ownership of the packet and swaps it with a blank one, and I was printing the values after that call. So they are not ignored.)
    3.  
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.stream->time_base
      • Set d.rgbframe->pts = packet.dts in read_frame
      • Set any of pts/dts/duration in packet in write_frame to anything.
      • Result: Warnings about packet timestamps not being set. The encoder seems to reset all packet timing fields to 0, so none of this has any effect.
    4.  
      • Init e.stream->time_base = d.stream->time_base
      • Init e.codecx->time_base = d.stream->time_base
      • I found these fields, pkt_pts, pkt_dts, and pkt_duration in AVFrame after reading this post, so I tried copying those all the way through to outframe.
      • Result: I really had my hopes up, but I ended up with the same results as attempt 3 (packet-timestamps-not-set warning, incorrect results).

    I tried various other hand-wavey permutations of the above and nothing worked. What I want to do is create an output file that plays back with the same timing and frame rate as the input (29.97 constant frame rate in this case).

    So how do I do this? Of the zillions of timing-related fields here, what do I do to make the output be the same as the input? And how do I do it in a way that handles arbitrary video input formats, which may store their timestamps and time bases in different places? I need this to always work.
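
    For context, and as a hedged sketch rather than anything stated in the original post: the pattern used in current FFmpeg examples is roughly to give the encoder a sensible time base, carry each decoded frame's timestamp through to the frame being encoded, and rescale the encoded packet into whatever time base the muxer actually chose for the stream. Against the test program above, that would look something like:

       // encoder setup: reuse the input stream's time base for the encoder
       // (the muxer may still change e.stream->time_base in avformat_write_header,
       // which is why the packet is rescaled below)
       e.codecx->time_base = d.stream->time_base;

       // in read_frame: remember when the decoded frame is meant to be shown,
       // expressed in the input stream's time base
       d.rgbframe->pts = av_frame_get_best_effort_timestamp(d.rawframe);

       // in write_frame: carry the timestamp onto the frame handed to the encoder
       outframe->pts = inframe->pts;

       // after encoding: convert from the encoder time base to the output
       // stream's time base before muxing
       av_packet_rescale_ts(&packet, e.codecx->time_base, e.stream->time_base);
       packet.stream_index = e.stream->index;
       err = av_interleaved_write_frame(e.formatx, &packet);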


    For reference, here is a table of all the packet and frame timestamps read from the video stream of my test input file, to give a sense of what it looks like. None of the input packets' pts are set, and the same goes for the frame pts; for some reason the duration of the first 108 frames is 0. VLC plays the file fine and reports the frame rate as 29.9700089:

  • FFmpeg image disappears in merged video

    10 November 2016, by utdev

    I am putting an image above a video like this :

    ffmpeg -i background.mpg -i image.png \
      -filter_complex "[0:v][1:v]overlay=25:25:enable='between(t,0,20)'" \
      -vcodec libx264 -crf 25 -pix_fmt yuv420p -t 30 -c:a copy newBackground.mpg

    The issue is that background.mpg has a duration of 30 seconds, but when I "merge" the image with the video, the image disappears after ~20 seconds while the background keeps playing up to 30 seconds (without the image). Why does this happen and how do I solve this issue?
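
    A hedged observation rather than part of the original question: the enable='between(t,0,20)' expression itself limits the overlay to the first 20 seconds, so keeping the image visible for the whole 30-second clip would presumably just mean widening (or dropping) that window, for example:

    ffmpeg -i background.mpg -i image.png \
      -filter_complex "[0:v][1:v]overlay=25:25:enable='between(t,0,30)'" \
      -vcodec libx264 -crf 25 -pix_fmt yuv420p -t 30 -c:a copy newBackground.mpg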

  • libavcodec: ffprobe on file encoded with FFV1 codec reports "read_quant_table error"

    20 November 2016, by Ali Alidoust

    I'm using the following code to encode a series of frames into an mkv or avi file with FFV1 encoding:

      HRESULT Session::createContext(LPCSTR filename, UINT width, UINT height, UINT fps_num, UINT fps_den) {
       LOG("Exporting to file: ", filename);

       AVCodecID codecId = AV_CODEC_ID_FFV1;
       this->pixelFormat = AV_PIX_FMT_YUV420P;

       this->codec = avcodec_find_encoder(codecId);
       RET_IF_NULL(this->codec, "Could not create codec", E_FAIL);

       this->oformat = av_guess_format(NULL, filename, NULL);
       RET_IF_NULL(this->oformat, "Could not create format", E_FAIL);
       this->oformat->video_codec = codecId;
       this->width = width;
       this->height = height;
       this->codecContext = avcodec_alloc_context3(this->codec);
       RET_IF_NULL(this->codecContext, "Could not allocate context for the codec", E_FAIL);

       this->codecContext->codec = this->codec;
       this->codecContext->codec_id = codecId;

       this->codecContext->pix_fmt = pixelFormat;
       this->codecContext->width = this->width;
       this->codecContext->height = this->height;
       this->codecContext->time_base.num = fps_den;
       this->codecContext->time_base.den = fps_num;

       this->codecContext->gop_size = 1;


       RET_IF_FAILED_AV(avformat_alloc_output_context2(&fmtContext, this->oformat, NULL, NULL), "Could not allocate format context", E_FAIL);
       RET_IF_NULL(this->fmtContext, "Could not allocate format context", E_FAIL);

       this->fmtContext->oformat = this->oformat;
       this->fmtContext->video_codec_id = codecId;

       this->stream = avformat_new_stream(this->fmtContext, this->codec);
       RET_IF_NULL(this->stream, "Could not create new stream", E_FAIL);
       this->stream->time_base = this->codecContext->time_base;
       RET_IF_FAILED_AV(avcodec_parameters_from_context(this->stream->codecpar, this->codecContext), "Could not convert AVCodecContext to AVParameters", E_FAIL);

       if (this->fmtContext->oformat->flags & AVFMT_GLOBALHEADER)
       {
           this->codecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       av_opt_set_int(this->codecContext->priv_data, "coder", 0, 0);
       av_opt_set_int(this->codecContext->priv_data, "context", 1, 0);
       av_opt_set_int(this->codecContext->priv_data, "slicecrc", 1, 0);
       //av_opt_set_int(this->codecContext->priv_data, "slicecrc", 1, 0);
       //av_opt_set_int(this->codecContext->priv_data, "pix_fmt", pixelFormat, 0);

       RET_IF_FAILED_AV(avcodec_open2(this->codecContext, this->codec, NULL), "Could not open codec", E_FAIL);
       RET_IF_FAILED_AV(avio_open(&this->fmtContext->pb, filename, AVIO_FLAG_WRITE), "Could not open output file", E_FAIL);
       RET_IF_NULL(this->fmtContext->pb, "Could not open output file", E_FAIL);
       RET_IF_FAILED_AV(avformat_write_header(this->fmtContext, NULL), "Could not write header", E_FAIL);

       frame = av_frame_alloc();
       RET_IF_NULL(frame, "Could not allocate frame", E_FAIL);
       frame->format = this->codecContext->pix_fmt;
       frame->width = width;
       frame->height = height;
       return S_OK;
    }

    HRESULT Session::writeFrame(IMFSample * pSample) {
       IMFMediaBuffer *mediaBuffer = NULL;
       BYTE *pDataNV12 = NULL;
       DWORD length;

       RET_IF_FAILED(pSample->ConvertToContiguousBuffer(&mediaBuffer), "Could not convert IMFSample to contiguous buffer", E_FAIL);
       RET_IF_FAILED(mediaBuffer->GetCurrentLength(&length), "Could not get buffer length", E_FAIL);
       RET_IF_FAILED(mediaBuffer->Lock(&pDataNV12, NULL, NULL), "Could not lock the buffer", E_FAIL);
       BYTE *pDataYUV420P = new BYTE[length];
       this->convertNV12toYUV420P(pDataNV12, pDataYUV420P, this->width, this->height);
       RET_IF_FAILED(av_image_fill_arrays(frame->data, frame->linesize, pDataYUV420P, pixelFormat, this->width, this->height, 1), "Could not fill the frame with data from the buffer", E_FAIL);
       LOG_IF_FAILED(mediaBuffer->Unlock(), "Could not unlock the buffer");

       frame->pts = av_rescale_q(this->pts++, this->codecContext->time_base, this->stream->time_base);

       AVPacket pkt;

       av_init_packet(&pkt);
       pkt.data = NULL;
       pkt.size = 0;

       RET_IF_FAILED_AV(avcodec_send_frame(this->codecContext, frame), "Could not send the frame to the encoder", E_FAIL);
       delete[] pDataYUV420P;
       if (SUCCEEDED(avcodec_receive_packet(this->codecContext, &pkt))) {
           RET_IF_FAILED_AV(av_interleaved_write_frame(this->fmtContext, &pkt), "Could not write the received packet.", E_FAIL);
       }

       av_packet_unref(&pkt);

       return S_OK;
    }

    HRESULT Session::endSession() {
       LOG("Ending session...");

       LOG("Closing files...")
       LOG_IF_FAILED_AV(av_write_trailer(this->fmtContext), "Could not finalize the output file.");
       LOG_IF_FAILED_AV(avio_close(this->fmtContext->pb), "Could not close the output file.");
       LOG_IF_FAILED_AV(avcodec_close(this->codecContext), "Could not close the codec.");
       av_free(this->codecContext);
       LOG("Done.")
       return S_OK;
    }
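
    (Not part of the original code, just a hedged sketch: with the send/receive API the encoder is normally drained before av_write_trailer, so any packets it is still buffering make it into the file. Something along these lines, with the hypothetical name drainEncoder, could be called from endSession before the trailer is written.)

    HRESULT Session::drainEncoder() {
       // signal end of stream, then collect whatever the encoder still holds
       avcodec_send_frame(this->codecContext, NULL);
       AVPacket pkt;
       av_init_packet(&pkt);
       pkt.data = NULL;
       pkt.size = 0;
       while (avcodec_receive_packet(this->codecContext, &pkt) == 0) {
           av_interleaved_write_frame(this->fmtContext, &pkt);
           av_packet_unref(&pkt);
       }
       return S_OK;
    }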

    The problem is that the generated file is not playable in either VLC or MPC-HC. However, MPC-HC reports the following info in the file properties:

    General
    Unique ID                      : 202978442142665779317960161865934977227 (0x98B439D9BE859109BD5EC00A62A238CB)
    Complete name                  : T:\Test.mkv
    Format                         : Matroska
    Format version                 : Version 4 / Version 2
    File size                      : 24.6 MiB
    Duration                       : 147ms
    Overall bit rate               : 1 401 Mbps
    Writing application            : Lavf57.57.100
    Writing library                : Lavf57.57.100

    Video
    ID                             : 1
    Format                         : FFV1
    Format version                 : Version 0
    Codec ID                       : V_MS/VFW/FOURCC / FFV1
    Duration                       : 147ms
    Width                          : 1 280 pixels
    Height                         : 720 pixels
    Display aspect ratio           : 16:9
    Frame rate mode                : Constant
    Frame rate                     : 1 000.000 fps
    Color space                    : YUV
    Chroma subsampling             : 4:2:0
    Bit depth                      : 8 bits
    Compression mode               : Lossless
    Default                        : Yes
    Forced                         : No
    DURATION                       : 00:00:00.147000000
    coder_type                     : Golomb Rice

    Something to note is that it reports 1000 FPS, which is weird since I've set AVCodecContext::time_base in the code.

    UPDATE 1:

    I managed to set the correct fps by setting the time_base property of the stream:

    this->stream->time_base.den = fps_num;
    this->stream->time_base.num = fps_den;

    VLC plays the output file but it shows VLC logo instead of the video, as if there is no video stream in the file.

    UPDATE 2:

    Cleaned up the code. Now if I set codecId = AV_CODEC_ID_MPEG2VIDEO, the output file is valid and plays in both VLC and MPC-HC. Using ffprobe on the file with FFV1 encoding yields the following result:

    C:\root\apps\ffmpeg>ffprobe.exe t:\test.avi
    ffprobe version 3.2 Copyright (c) 2007-2016 the FFmpeg developers
     built with gcc 5.4.0 (GCC)
     configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-libebur128 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
     libavutil      55. 34.100 / 55. 34.100
     libavcodec     57. 64.100 / 57. 64.100
     libavformat    57. 56.100 / 57. 56.100
     libavdevice    57.  1.100 / 57.  1.100
     libavfilter     6. 65.100 /  6. 65.100
     libswscale      4.  2.100 /  4.  2.100
     libswresample   2.  3.100 /  2.  3.100
     libpostproc    54.  1.100 / 54.  1.100
    [ffv1 @ 00000000006b83a0] read_quant_table error
    Input #0, avi, from 't:\test.avi':
     Metadata:
       encoder         : Lavf57.56.100
     Duration: 00:00:04.94, start: 0.000000, bitrate: 107005 kb/s
       Stream #0:0: Video: ffv1 (FFV1 / 0x31564646), yuv420p, 1280x720, 107717 kb/s, 29.97 fps, 29.97 tbr, 29.97 tbn, 29.97 tbc