Keyword: - Tags - /ogv

Other articles (42)

  • Requesting the creation of a channel

    12 March 2010, by

    Depending on how the platform is configured, the user may have at their disposal two different ways of requesting the creation of a channel. The first is at the moment of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields which first of all give the administrators information about (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • Configurable image and logo sizes

    9 February 2011, by

    In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can vary from one theme to another, they can be defined directly in the theme, which spares the user from having to configure them manually after changing the appearance of their site.
    These image sizes are also available in the specific configuration of MediaSPIP Core: the maximum size of the site logo in pixels, which allows (...)

On other sites (7943)

  • FFmpeg how to apply "aac_adtstoasc" and "h264_mp4toannexb" bitstream filters to transcode to h.264 with AAC

    9 July 2015, by larod

    I've been struggling with this issue for about a month. I have studied the FFmpeg documentation, more specifically transcode_aac.c, transcoding.c, decoding_encoding.c, and HandBrake's implementation, which is really dense.

    The error that I'm getting is the following: [mp4 @ 0x11102f800] Malformed AAC bitstream detected: use the audio bitstream filter 'aac_adtstoasc' to fix it ('-bsf:a aac_adtstoasc' option with ffmpeg).

    The research I’ve done points to a filter that needs to be implemented.

    FIX: AAC in some container formats (FLV, MP4, MKV, etc.) needs the "aac_adtstoasc" bitstream filter (BSF).

    I know I can do the following:

    AVBitStreamFilterContext* aacbsfc =  av_bitstream_filter_init("aac_adtstoasc");

    And then do something like this:

    av_bitstream_filter_filter(aacbsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, pkt.data, pkt.size, 0);

    What eludes me is when to filter the AVPacket: is it before calling av_packet_rescale_ts or inside init_filter (see the sketch after the code listings below)? I would greatly appreciate it if someone could point me in the right direction. Thanks in advance.

    // Variables
    AVFormatContext *_ifmt_ctx, *_ofmt_ctx;
    FilteringContext *_filter_ctx;
    AVBitStreamFilterContext *_h264bsfc;
    AVBitStreamFilterContext *_aacbsfc;
    NSURL *_srcURL, *_dstURL;

    - (IBAction)trancode:(id)sender {
           NSLog(@"%s %@",__func__, _mediaFile.fsName);
           int ret, got_frame;
           int (*dec_func)(AVCodecContext *, AVFrame *, int *, const AVPacket *);
           unsigned int stream_index, i;
           enum AVMediaType type;
           AVPacket packet = {.data = NULL, .size = 0};
           AVFrame *frame = NULL;
           _h264bsfc = av_bitstream_filter_init("h264_mp4toannexb");
           _aacbsfc =  av_bitstream_filter_init("aac_adtstoasc");

       _srcURL = [Utility urlFromBookmark:_mediaFile.bookmark];
       if ([_srcURL startAccessingSecurityScopedResource]) {
           NSString *newFileName = [[_srcURL.lastPathComponent stringByDeletingPathExtension]stringByAppendingPathExtension:@"mp4"];
           _dstURL = [NSURL fileURLWithPath:[[_srcURL URLByDeletingLastPathComponent]URLByAppendingPathComponent:newFileName].path isDirectory:NO];

           [AppDelegate ffmpegRegisterAll];

           ret = open_input_file(_srcURL.path.fileSystemRepresentation);
           if (ret < 0) {
               NSLog(@"Error openning input file.");
           }

           ret = open_output_file(_dstURL.path.fileSystemRepresentation);
           if (ret < 0) {
               NSLog(@"Error openning output file.");
           }

           ret = init_filters();
           if (ret < 0) {
               NSLog(@"Error initializing filters.");
           }

           AVBitStreamFilterContext *h264bsfc = av_bitstream_filter_init("h264_mp4toannexb");
           AVBitStreamFilterContext* aacbsfc =  av_bitstream_filter_init("aac_adtstoasc");
           // Transcode *******************************************************************************
           while (1) {
               if ((ret = av_read_frame(_ifmt_ctx, &packet)) < 0) {
                   break;
               }
               stream_index = packet.stream_index;
               type = _ifmt_ctx->streams[packet.stream_index]->codec->codec_type;
               av_log(NULL, AV_LOG_DEBUG, "Demuxer gave frame of stream_index %u\n", stream_index);



               if (_filter_ctx[stream_index].filter_graph) {
                   av_log(NULL, AV_LOG_DEBUG, "Going to reencode&filter the frame\n");
                   frame = av_frame_alloc();
                   if (!frame) {
                       ret = AVERROR(ENOMEM);
                       break;
                   }

                   av_packet_rescale_ts(&packet, _ifmt_ctx->streams[stream_index]->time_base, _ifmt_ctx->streams[stream_index]->codec->time_base);
                   dec_func = (type == AVMEDIA_TYPE_VIDEO) ? avcodec_decode_video2 : avcodec_decode_audio4;
                   ret = dec_func(_ifmt_ctx->streams[stream_index]->codec, frame, &got_frame, &packet);
                   if (ret < 0) {
                       av_frame_free(&frame);
                       av_log(NULL, AV_LOG_ERROR, "Decoding failed\n");
                       break;
                   }

                   if (got_frame) {
                       frame->pts = av_frame_get_best_effort_timestamp(frame);
                       ret = filter_encode_write_frame(frame, stream_index);
                       av_frame_free(&frame);
                       if (ret < 0)
                           goto end;
                   } else {
                       av_frame_free(&frame);
                   }
               } else {
                   /* remux this frame without reencoding */
                   av_packet_rescale_ts(&packet,
                                        _ifmt_ctx->streams[stream_index]->time_base,
                                        _ofmt_ctx->streams[stream_index]->time_base);

                   ret = av_interleaved_write_frame(_ofmt_ctx, &packet);
                   if (ret < 0)
                       goto end;
               }
               av_free_packet(&packet);
           }
           // *****************************************************************************************

           /* flush filters and encoders */
           for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
               /* flush filter */
               if (!_filter_ctx[i].filter_graph)
                   continue;
               ret = filter_encode_write_frame(NULL, i);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Flushing filter failed\n");
                   goto end;
               }

               /* flush encoder */
               ret = flush_encoder(i);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Flushing encoder failed\n");
                   goto end;
               }
           }
           av_write_trailer(_ofmt_ctx);
           av_bitstream_filter_close(h264bsfc);
           av_bitstream_filter_close(aacbsfc);
       } else {
           NSLog(@"Unable to resolve url for %@",_mediaFile.url.lastPathComponent);
       }
       [_srcURL stopAccessingSecurityScopedResource];

    end:
       av_free_packet(&packet);
       av_frame_free(&frame);
       for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
           avcodec_close(_ifmt_ctx->streams[i]->codec);
           if (_ofmt_ctx && _ofmt_ctx->nb_streams > i && _ofmt_ctx->streams[i] && _ofmt_ctx->streams[i]->codec)
               avcodec_close(_ofmt_ctx->streams[i]->codec);
           if (_filter_ctx && _filter_ctx[i].filter_graph)
               avfilter_graph_free(&_filter_ctx[i].filter_graph);
       }
       av_free(_filter_ctx);
       avformat_close_input(&_ifmt_ctx);
       if (_ofmt_ctx && !(_ofmt_ctx->oformat->flags & AVFMT_NOFILE))
           avio_closep(&_ofmt_ctx->pb);
       avformat_free_context(_ofmt_ctx);
    }

    The following method is used to open the input file and create the ifmt_ctx.

    int open_input_file(const char *filename) {
       int ret;
       unsigned int i;

       _ifmt_ctx = NULL;
       if ((ret = avformat_open_input(&_ifmt_ctx, filename, NULL, NULL)) < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot open input file\n");
           return ret;
       }

       if ((ret = avformat_find_stream_info(_ifmt_ctx, NULL)) < 0) {
           av_log(NULL, AV_LOG_ERROR, "Cannot find stream information\n");
           return ret;
       }

       for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
           AVStream *stream;
           AVCodecContext *codec_ctx;
           stream = _ifmt_ctx->streams[i];
           codec_ctx = stream->codec;
           /* Reencode video & audio and remux subtitles etc. */
           if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
               || codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
               /* Open decoder */
               ret = avcodec_open2(codec_ctx,
                                   avcodec_find_decoder(codec_ctx->codec_id), NULL);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
                   return ret;
               }
           }
       }

       // Remove later
       av_dump_format(_ifmt_ctx, 0, filename, 0);
       return 0;
    }

    This method is used to open the output file and create the output format context.

    int open_output_file(const char *filename) {
       AVStream *out_stream;
       AVStream *in_stream;
       AVCodecContext *dec_ctx, *enc_ctx;
       AVCodec *encoder;
       int ret;
       unsigned int i;

       _ofmt_ctx = NULL;
       avformat_alloc_output_context2(&_ofmt_ctx, NULL, NULL, filename);
       if (!_ofmt_ctx) {
           av_log(NULL, AV_LOG_ERROR, "Could not create output context\n");
           return AVERROR_UNKNOWN;
       }


       for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
           out_stream = avformat_new_stream(_ofmt_ctx, NULL);
           if (!out_stream) {
               av_log(NULL, AV_LOG_ERROR, "Failed allocating output stream\n");
               return AVERROR_UNKNOWN;
           }

           in_stream = _ifmt_ctx->streams[i];
           dec_ctx = in_stream->codec;
           enc_ctx = out_stream->codec;

           if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
               // set video stream
               encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
               avcodec_get_context_defaults3(enc_ctx, encoder);
               av_opt_set(enc_ctx->priv_data, "preset", "slow", 0);
               enc_ctx->height = dec_ctx->height;
               enc_ctx->width = dec_ctx->width;
               enc_ctx->bit_rate = dec_ctx->bit_rate;
               enc_ctx->time_base = out_stream->time_base = dec_ctx->time_base;
               enc_ctx->pix_fmt = encoder->pix_fmts[0];

               ret = avcodec_open2(enc_ctx, encoder, NULL);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
                   return ret;
               }

           } else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
               // set audio stream
               //encoder = avcodec_find_encoder(AV_CODEC_ID_AAC);
               encoder = avcodec_find_encoder_by_name("libfdk_aac");
               avcodec_get_context_defaults3(enc_ctx, encoder);
               enc_ctx->profile = FF_PROFILE_AAC_HE_V2;
               enc_ctx->sample_rate = dec_ctx->sample_rate;
               enc_ctx->channel_layout = dec_ctx->channel_layout;
               enc_ctx->channels = av_get_channel_layout_nb_channels(enc_ctx->channel_layout);
               enc_ctx->sample_fmt = encoder->sample_fmts[0];
               enc_ctx->time_base = out_stream->time_base = (AVRational){1, enc_ctx->sample_rate};
               enc_ctx->bit_rate = dec_ctx->bit_rate;

               ret = avcodec_open2(enc_ctx, encoder, NULL);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Cannot open video encoder for stream #%u\n", i);
                   return ret;
               }

           } else if (dec_ctx->codec_type == AVMEDIA_TYPE_UNKNOWN) {
               // deal with error
               av_log(NULL, AV_LOG_FATAL, "Elementary stream #%d is of unknown type, cannot proceed\n", i);
               return AVERROR_INVALIDDATA;
           } else {
               // remux stream
               ret = avcodec_copy_context(_ofmt_ctx->streams[i]->codec,
                                          _ifmt_ctx->streams[i]->codec);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Copying stream context failed\n");
                   return ret;
               }
           }

           if (_ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER) {
               enc_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
           }
       }

       av_dump_format(_ofmt_ctx, 0, filename, 1);

       NSURL *openFileURL = [Utility openPanelAt:[NSURL URLWithString:_dstURL.URLByDeletingLastPathComponent.path]
                                       withTitle:@"Transcode File"
                                         message:@"Please allow Maví to create the new file."
                                       andPrompt:@"Grant Access"];

       openFileURL = [openFileURL URLByAppendingPathComponent:_dstURL.lastPathComponent isDirectory:NO];
       if (!(_ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
           ret = avio_open(&_ofmt_ctx->pb, openFileURL.fileSystemRepresentation, AVIO_FLAG_WRITE);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Could not open output file '%s'", filename);
               return ret;
           }
       }

       /* init muxer, write output file header */
       ret = avformat_write_header(_ofmt_ctx, NULL);
       if (ret < 0) {
           av_log(NULL, AV_LOG_ERROR, "Error occurred when opening output file\n");
           return ret;
       }

       return 0;
    }

    These two methods deal with initialising the filters and filtering.

    int init_filters(void) {
       const char *filter_spec;
       unsigned int i;
       int ret;
       _filter_ctx = av_malloc_array(_ifmt_ctx->nb_streams, sizeof(*_filter_ctx));
       if (!_filter_ctx)
           return AVERROR(ENOMEM);

       for (i = 0; i < _ifmt_ctx->nb_streams; i++) {
           _filter_ctx[i].buffersrc_ctx  = NULL;
           _filter_ctx[i].buffersink_ctx = NULL;
           _filter_ctx[i].filter_graph   = NULL;
           if (!(_ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO
                 || _ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO))
               continue;


           if (_ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
               filter_spec = "null"; /* passthrough (dummy) filter for video */
           else
               filter_spec = "anull"; /* passthrough (dummy) filter for audio */
           ret = init_filter(&_filter_ctx[i], _ifmt_ctx->streams[i]->codec,
                             _ofmt_ctx->streams[i]->codec, filter_spec);
           if (ret)
               return ret;
       }
       return 0;
    }
    int init_filter(FilteringContext* fctx, AVCodecContext *dec_ctx, AVCodecContext *enc_ctx, const char *filter_spec) {
       char args[512];
       int ret = 0;
       AVFilter *buffersrc = NULL;
       AVFilter *buffersink = NULL;
       AVFilterContext *buffersrc_ctx = NULL;
       AVFilterContext *buffersink_ctx = NULL;
       AVFilterInOut *outputs = avfilter_inout_alloc();
       AVFilterInOut *inputs  = avfilter_inout_alloc();
       AVFilterGraph *filter_graph = avfilter_graph_alloc();

       if (!outputs || !inputs || !filter_graph) {
           ret = AVERROR(ENOMEM);
           goto end;
       }

       if (dec_ctx->codec_type == AVMEDIA_TYPE_VIDEO) {
           buffersrc = avfilter_get_by_name("buffer");
           buffersink = avfilter_get_by_name("buffersink");
           if (!buffersrc || !buffersink) {
               av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }

           snprintf(args, sizeof(args),
                    "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                    dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
                    dec_ctx->time_base.num, dec_ctx->time_base.den,
                    dec_ctx->sample_aspect_ratio.num,
                    dec_ctx->sample_aspect_ratio.den);

           ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                                              args, NULL, filter_graph);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
               goto end;
           }

           ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                                              NULL, NULL, filter_graph);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
               goto end;
           }

           ret = av_opt_set_bin(buffersink_ctx, "pix_fmts",
                                (uint8_t*)&enc_ctx->pix_fmt, sizeof(enc_ctx->pix_fmt),
                                AV_OPT_SEARCH_CHILDREN);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
               goto end;
           }
       } else if (dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
           buffersrc = avfilter_get_by_name("abuffer");
           buffersink = avfilter_get_by_name("abuffersink");
           if (!buffersrc || !buffersink) {
               av_log(NULL, AV_LOG_ERROR, "filtering source or sink element not found\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }

           if (!dec_ctx->channel_layout)
               dec_ctx->channel_layout =
               av_get_default_channel_layout(dec_ctx->channels);
           snprintf(args, sizeof(args),
                    "time_base=%d/%d:sample_rate=%d:sample_fmt=%s:channel_layout=0x%"PRIx64,
                    dec_ctx->time_base.num, dec_ctx->time_base.den, dec_ctx->sample_rate,
                    av_get_sample_fmt_name(dec_ctx->sample_fmt),
                    dec_ctx->channel_layout);
           ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                                              args, NULL, filter_graph);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer source\n");
               goto end;
           }

           ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                                              NULL, NULL, filter_graph);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot create audio buffer sink\n");
               goto end;
           }

           ret = av_opt_set_bin(buffersink_ctx, "sample_fmts",
                                (uint8_t*)&enc_ctx->sample_fmt, sizeof(enc_ctx->sample_fmt),
                                AV_OPT_SEARCH_CHILDREN);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot set output sample format\n");
               goto end;
           }

           ret = av_opt_set_bin(buffersink_ctx, "channel_layouts",
                                (uint8_t*)&enc_ctx->channel_layout,
                                sizeof(enc_ctx->channel_layout), AV_OPT_SEARCH_CHILDREN);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot set output channel layout\n");
               goto end;
           }

           ret = av_opt_set_bin(buffersink_ctx, "sample_rates",
                                (uint8_t*)&enc_ctx->sample_rate, sizeof(enc_ctx->sample_rate),
                                AV_OPT_SEARCH_CHILDREN);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Cannot set output sample rate\n");
               goto end;
           }
       } else {
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       /* Endpoints for the filter graph. */
       outputs->name       = av_strdup("in");
       outputs->filter_ctx = buffersrc_ctx;
       outputs->pad_idx    = 0;
       outputs->next       = NULL;

       inputs->name       = av_strdup("out");
       inputs->filter_ctx = buffersink_ctx;
       inputs->pad_idx    = 0;
       inputs->next       = NULL;

       if (!outputs->name || !inputs->name) {
           ret = AVERROR(ENOMEM);
           goto end;
       }

       if ((ret = avfilter_graph_parse_ptr(filter_graph, filter_spec,
                                           &inputs, &outputs, NULL)) < 0)
           goto end;

       if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0)
           goto end;

       /* Fill FilteringContext */
       fctx->buffersrc_ctx = buffersrc_ctx;
       fctx->buffersink_ctx = buffersink_ctx;
       fctx->filter_graph = filter_graph;

    end:
       avfilter_inout_free(&inputs);
       avfilter_inout_free(&outputs);

       return ret;
    }

    Finally these two methods take care of writing the frames.

    int encode_write_frame(AVFrame *filt_frame, unsigned int stream_index, int *got_frame) {
       int ret;
       int got_frame_local;
       AVPacket enc_pkt;
       int (*enc_func)(AVCodecContext *, AVPacket *, const AVFrame *, int *) =
       (_ifmt_ctx->streams[stream_index]->codec->codec_type ==
        AVMEDIA_TYPE_VIDEO) ? avcodec_encode_video2 : avcodec_encode_audio2;

       if (!got_frame)
           got_frame = &got_frame_local;

       av_log(NULL, AV_LOG_INFO, "Encoding frame\n");
       /* encode filtered frame */
       enc_pkt.data = NULL;
       enc_pkt.size = 0;
       av_init_packet(&enc_pkt);
       ret = enc_func(_ofmt_ctx->streams[stream_index]->codec, &enc_pkt,
                      filt_frame, got_frame);
       av_frame_free(&filt_frame);
       if (ret < 0)
           return ret;
       if (!(*got_frame))
           return 0;

       /* prepare packet for muxing */
       enc_pkt.stream_index = stream_index;
       av_packet_rescale_ts(&enc_pkt,
                            _ofmt_ctx->streams[stream_index]->codec->time_base,
                            _ofmt_ctx->streams[stream_index]->time_base);

       av_log(NULL, AV_LOG_DEBUG, "Muxing frame\n");
       /* mux encoded frame */
       ret = av_interleaved_write_frame(_ofmt_ctx, &enc_pkt);
       return ret;
    }
    int filter_encode_write_frame(AVFrame *frame, unsigned int stream_index)
    {
       int ret;
       AVFrame *filt_frame;

       av_log(NULL, AV_LOG_INFO, "Pushing decoded frame to filters\n");
       /* push the decoded frame into the filtergraph */
       ret = av_buffersrc_add_frame_flags(_filter_ctx[stream_index].buffersrc_ctx,
                                          frame, 0);
       if (ret < 0) {
           av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
           return ret;
       }

       /* pull filtered frames from the filtergraph */
       while (1) {
           filt_frame = av_frame_alloc();
           if (!filt_frame) {
               ret = AVERROR(ENOMEM);
               break;
           }
           av_log(NULL, AV_LOG_INFO, "Pulling filtered frame from filters\n");
           ret = av_buffersink_get_frame(_filter_ctx[stream_index].buffersink_ctx,
                                         filt_frame);
           if (ret < 0) {
               /* if no more frames for output - returns AVERROR(EAGAIN)
                * if flushed and no more frames for output - returns AVERROR_EOF
                * rewrite retcode to 0 to show it as normal procedure completion
                */
               if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                   ret = 0;
               av_frame_free(&filt_frame);
               break;
           }

           filt_frame->pict_type = AV_PICTURE_TYPE_NONE;
           ret = encode_write_frame(filt_frame, stream_index, NULL);
           if (ret < 0)
               break;
       }

       return ret;
    }
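
    For what it's worth, with this older (pre-4.0) bitstream-filter API the filter is normally applied per packet on the muxing side, right before av_interleaved_write_frame and after the packet has been rescaled to the output stream's time base; it is not applied before av_packet_rescale_ts on the decode side, and it does not belong inside init_filter (which only builds frame-level filter graphs). It also mainly matters for packets that are stream-copied into the MP4: in the re-encode path above, libfdk_aac already emits raw (non-ADTS) AAC and the muxer takes its extradata from the encoder, so aac_adtstoasc is usually unnecessary there, while h264_mp4toannexb goes the other way (it is needed when writing H.264 into Annex-B outputs such as MPEG-TS, not into MP4). A minimal sketch of the per-packet call, using a hypothetical helper named apply_bsf that is not part of the code above and assumes the same FFmpeg headers as the rest of this file:

    static int apply_bsf(AVBitStreamFilterContext *bsfc, AVCodecContext *avctx, AVPacket *pkt) {
        AVPacket filtered = *pkt;
        /* The old API returns >0 if it allocated a new output buffer,
         * 0 if the output points into the input buffer, <0 on error. */
        int ret = av_bitstream_filter_filter(bsfc, avctx, NULL,
                                             &filtered.data, &filtered.size,
                                             pkt->data, pkt->size,
                                             pkt->flags & AV_PKT_FLAG_KEY);
        if (ret < 0)
            return ret;                 /* leave the packet untouched on error */
        if (ret > 0) {
            /* Take ownership of the newly allocated buffer so the packet
             * can later be freed normally. */
            filtered.buf = av_buffer_create(filtered.data, filtered.size,
                                            av_buffer_default_free, NULL, 0);
            if (!filtered.buf) {
                av_free(filtered.data);
                return AVERROR(ENOMEM);
            }
            /* keep any side data with the filtered packet */
            pkt->side_data = NULL;
            pkt->side_data_elems = 0;
            av_free_packet(pkt);
        }
        *pkt = filtered;
        return 0;
    }

    In the stream-copy branch of the transcode loop this would then sit between av_packet_rescale_ts and av_interleaved_write_frame, roughly like so:

    /* remux this frame without reencoding */
    av_packet_rescale_ts(&packet,
                         _ifmt_ctx->streams[stream_index]->time_base,
                         _ofmt_ctx->streams[stream_index]->time_base);
    /* ADTS AAC has to be converted to raw AAC before it can go into MP4;
     * the filter also fills the output codec context's extradata with the
     * AudioSpecificConfig the muxer needs. */
    if (_ifmt_ctx->streams[stream_index]->codec->codec_id == AV_CODEC_ID_AAC)
        apply_bsf(_aacbsfc, _ofmt_ctx->streams[stream_index]->codec, &packet);
    ret = av_interleaved_write_frame(_ofmt_ctx, &packet);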
  • How to extract subtitles from a .wtv file using ffmpeg?

    20 May 2013, by user1978421

    Does anyone know how to extract subtitles from a .wtv file using ffmpeg?
    I have tried many different commands; none of them works.
    The closest is this one:

    ffmpeg -i input.wtv -vn -an -codec:s:0 srt test.srt

    But it complains: Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

    What does that really mean? (See the note after the console output below.)

    The complete console output is as follows:

    ffmpeg version N-52233-gee94362 Copyright (c) 2000-2013 the FFmpeg developers
     built on Apr 18 2013 02:50:33 with gcc 4.8.0 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
     libavutil      52. 26.100 / 52. 26.100
     libavcodec     55.  2.100 / 55.  2.100
     libavformat    55.  2.100 / 55.  2.100
     libavdevice    55.  0.100 / 55.  0.100
     libavfilter     3. 56.103 /  3. 56.103
     libswscale      2.  2.100 /  2.  2.100
     libswresample   0. 17.102 /  0. 17.102
     libpostproc    52.  3.100 / 52.  3.100
    [wtv @ 0269bac0] truncated file
       Last message repeated 3 times
    [mpeg2video @ 02692f60] Invalid frame dimensions 0x0.
       Last message repeated 12 times
    [wtv @ 0269bac0] max_analyze_duration 5000000 reached at 5016000 microseconds
    Input #0, wtv, from '16 and Pregnant- Unseen Moments_Viva_2013_04_09_19_57_00.wtv':
     Metadata:
       WM/MediaClassPrimaryID: db9830bd-3ab3-4fab-8a371a995f7ff74
       WM/MediaClassSecondaryID: ba7f258a-62f7-47a9-b21f4651c42a000
       Title           : 16 and Pregnant: Unseen Moments
       WM/SubTitleDescription: Dr Drew hosts a look at the exclusive moments we didn't see from the second season of 16 and Pregnant.
       genre           : Documentary;Reality TV
       WM/OriginalReleaseTime: 0
       WM/MediaCredits : ;;;
       service_provider: Viva
       service_name    : Viva
       WM/MediaOriginalChannel: 16
       WM/MediaOriginalChannelSubNumber: 0
       WM/MediaOriginalBroadcastDateTime: 2011-12-13T00:00:00Z
       WM/MediaOriginalRunTime: 38377217299
       WM/MediaIsStereo: false
       WM/MediaIsRepeat: true
       WM/MediaIsLive  : false
       WM/MediaIsTape  : false
       WM/MediaIsDelay : false
       WM/MediaIsSubtitled: false
       WM/MediaIsMovie : false
       WM/MediaIsPremiere: false
       WM/MediaIsFinale: false
       WM/MediaIsSAP   : false
       WM/MediaIsSport : false
       WM/Provider     : MediaCenterDefault
       WM/VideoClosedCaptioning: false
       WM/WMRVEncodeTime: 2013-04-09 18:57:02
       WM/WMRVSeriesUID: !GenericSeries!16 and Pregnant: Unseen Moments
       WM/WMRVServiceID: !Generated!be3ff88ee57a4cadb0334ef3df5bc91b
       WM/WMRVProgramID: !MCProgram!37282063
       WM/WMRVRequestID: 0
       WM/WMRVScheduleItemID: 0
       WM/WMRVQuality  : 0
       WM/WMRVOriginalSoftPrePadding: 480
       WM/WMRVOriginalSoftPostPadding: 60
       WM/WMRVHardPrePadding: -300
       WM/WMRVHardPostPadding: 0
       WM/WMRVATSCContent: false
       WM/WMRVDTVContent: true
       WM/WMRVHDContent: false
       Duration        : 38379654765
       WM/WMRVEndTime  : 2013-04-09 20:01:00
       WM/WMRVBitrate  : 2.118481
       WM/WMRVKeepUntil: 0
       WM/WMRVActualSoftPrePadding: 477
       WM/WMRVActualSoftPostPadding: 60
       WM/WMRVContentProtected: false
       WM/WMRVContentProtectedPercent: 0
       WM/WMRVExpirationSpan: 9223372036854775807
       WM/WMRVInBandRatingSystem: 255
       WM/WMRVInBandRatingLevel: 255
       WM/WMRVInBandRatingAttributes: 0
       WM/WMRVWatched  : true
     Duration: 01:03:58.07, start: 1.509175, bitrate: 2118 kb/s
       Stream #0:0[0x22](eng): Subtitle: dvb_subtitle
       Stream #0:1[0x23](eng): Audio: mp2 (P[0][0][0] / 0x0050), 48000 Hz, stereo, s16p, 256 kb/s
       Stream #0:2[0x24]: Video: mpeg2video (Main), yuv420p, 544x576 [SAR 32:17 DAR 16:9], 25 fps, 25 tbr, 10000k tbn, 50 tbc
       Stream #0:3[0x0]: Video: mjpeg, yuvj420p, 189x200 [SAR 96:96 DAR 189:200], 90k tbr, 90k tbn, 90k tbc
       Metadata:
         title           : TV Thumbnail
    File 'test.srt' already exists. Overwrite ? [y/N] Output #0, srt, to 'test.srt':
     Metadata:
       WM/MediaClassPrimaryID: db9830bd-3ab3-4fab-8a371a995f7ff74
       WM/MediaClassSecondaryID: ba7f258a-62f7-47a9-b21f4651c42a000
       Title           : 16 and Pregnant: Unseen Moments
       WM/SubTitleDescription: Dr Drew hosts a look at the exclusive moments we didn't see from the second season of 16 and Pregnant.
       genre           : Documentary;Reality TV
       WM/OriginalReleaseTime: 0
       WM/MediaCredits : ;;;
       service_provider: Viva
       service_name    : Viva
       WM/MediaOriginalChannel: 16
       WM/MediaOriginalChannelSubNumber: 0
       WM/MediaOriginalBroadcastDateTime: 2011-12-13T00:00:00Z
       WM/MediaOriginalRunTime: 38377217299
       WM/MediaIsStereo: false
       WM/MediaIsRepeat: true
       WM/MediaIsLive  : false
       WM/MediaIsTape  : false
       WM/MediaIsDelay : false
       WM/MediaIsSubtitled: false
       WM/MediaIsMovie : false
       WM/MediaIsPremiere: false
       WM/MediaIsFinale: false
       WM/MediaIsSAP   : false
       WM/MediaIsSport : false
       WM/Provider     : MediaCenterDefault
       WM/VideoClosedCaptioning: false
       WM/WMRVEncodeTime: 2013-04-09 18:57:02
       WM/WMRVSeriesUID: !GenericSeries!16 and Pregnant: Unseen Moments
       WM/WMRVServiceID: !Generated!be3ff88ee57a4cadb0334ef3df5bc91b
       WM/WMRVProgramID: !MCProgram!37282063
       WM/WMRVRequestID: 0
       WM/WMRVScheduleItemID: 0
       WM/WMRVQuality  : 0
       WM/WMRVOriginalSoftPrePadding: 480
       WM/WMRVOriginalSoftPostPadding: 60
       WM/WMRVHardPrePadding: -300
       WM/WMRVHardPostPadding: 0
       WM/WMRVATSCContent: false
       WM/WMRVDTVContent: true
       WM/WMRVHDContent: false
       Duration        : 38379654765
       WM/WMRVEndTime  : 2013-04-09 20:01:00
       WM/WMRVBitrate  : 2.118481
       WM/WMRVKeepUntil: 0
       WM/WMRVActualSoftPrePadding: 477
       WM/WMRVActualSoftPostPadding: 60
       WM/WMRVContentProtected: false
       WM/WMRVContentProtectedPercent: 0
       WM/WMRVExpirationSpan: 9223372036854775807
       WM/WMRVInBandRatingSystem: 255
       WM/WMRVInBandRatingLevel: 255
       WM/WMRVInBandRatingAttributes: 0
       WM/WMRVWatched  : true
       Stream #0:0(eng): Subtitle: srt
    Stream mapping:
     Stream #0:0 -> #0:0 (dvbsub -> srt)
    Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
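
    The likely explanation for that last error: dvb_subtitle is a bitmap subtitle format, while srt is a text format, and ffmpeg cannot turn bitmap subtitles into text on its own (that would require OCR), so the rather generic "maybe incorrect parameters" message is simply how the failed dvbsub-to-srt conversion surfaces here. One possible workaround (file names are illustrative) is to stream-copy the DVB subtitle track into a container that can carry it, such as MPEG-TS, and then run an OCR-capable tool on the result:

    ffmpeg -i input.wtv -map 0:0 -c:s copy subtitles.ts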
  • What does the word "context" usually mean in structures?

    8 June 2015, by ArturPhilibin

    I’m trying to build an application using some of ffmpeg’s libraries and I’m noticing many data structures with the word "Context" in them.

    You can see some here http://www.ffmpeg.org/doxygen/trunk/classes.html

    I don't understand the use of the word "context" in this... context.

    Any hints as to what it generally means?
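
    In FFmpeg, as in many C libraries, a "context" is simply a struct that bundles all the state, configuration and buffers needed for one instance of an operation, and that is passed explicitly to every function working on that instance, since C has no implicit this pointer: an AVFormatContext holds everything about one open input or output file, an AVCodecContext everything about one encoder or decoder instance, an SwsContext everything about one scaler setup, and so on. A minimal sketch of the pattern, with hypothetical names that are not part of FFmpeg:

    #include <stdio.h>
    #include <stdlib.h>

    /* A "context" is just a struct carrying the state of one instance. */
    typedef struct CounterContext {
        const char *name;   /* configuration set at creation time */
        long        count;  /* mutable state owned by this instance */
    } CounterContext;

    static CounterContext *counter_alloc(const char *name) {
        CounterContext *ctx = calloc(1, sizeof(*ctx));
        if (ctx)
            ctx->name = name;
        return ctx;
    }

    static void counter_increment(CounterContext *ctx) {
        ctx->count++;           /* every call receives the context explicitly */
    }

    static void counter_free(CounterContext **ctx) {
        free(*ctx);
        *ctx = NULL;            /* mirrors the avformat_close_input(&ctx) convention */
    }

    int main(void) {
        CounterContext *ctx = counter_alloc("demo");
        if (!ctx)
            return 1;
        counter_increment(ctx);
        printf("%s: %ld\n", ctx->name, ctx->count);
        counter_free(&ctx);
        return 0;
    }

    The allocate / use / free-through-a-double-pointer shape is the same one the FFmpeg code above follows with avformat_open_input and avformat_close_input.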