Advanced search

Media (91)

Other articles (43)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. No configuration step is therefore required for this.

  • Media-specific libraries and software

    10 December 2010, by

    For correct and optimal operation, several things need to be taken into account.
    After installing apache2, mysql and php5, it is important to install the other required software, whose installation is described in the related links: a set of multimedia libraries (x264, libtheora, libvpx) used to encode and decode video and audio, in order to support as many file formats as possible (see this tutorial); FFMpeg with the maximum number of decoders and (...)

  • Other interesting software

    13 April 2011, by

    We don’t claim to be the only ones doing what we do, and we certainly don’t claim to be the best at it. We simply try to do it well and to keep getting better.
    The following list covers software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to emulate.
    We don’t know these projects and haven’t tried them, but you can take a peek.
    Videopress
    Website : http://videopress.com/
    License : GNU/GPL v2
    Source code : (...)

On other sites (3315)

  • libavcodec/libx264 does not produce B-frames

    6 November 2013, by Rob Schmidt

    I am writing an application in C++ that uses libavcodec with libx264 to encode video. However, the encoded data ended up being much larger than I expected. I analyzed the results and discovered that my encoding never produced B-frames, only I- and P-frames.

    I created a standalone utility based on the ffmpeg source code and examples to test my encoder setup. It reads in an H.264 file, re-encodes the decoded frames, and outputs the result to a file using the ITU H.264 Annex B format. I also used ffmpeg to perform the same operation so I could compare against a known good implementation. My utility never outputs B-frames whereas ffmpeg does.

    I have since tried to figure out what ffmpeg does that my code doesn't. I first tried manually specifying encoder settings related to B-frames. This had no effect.

    I then tried running both ffmpeg and my utility under gdb and comparing the contents of the AVStream, AVCodecContext, and X264Context prior to opening the encoder and manually setting any fields that appeared different. Even with identical settings, I still only produce I- and P-frames.

    Finally, I thought that perhaps the problem was with my timestamp handling. I reworked my test utility to mimic the pipeline used by ffmpeg and to output timestamp debugging output like ffmpeg does. Even with my timestamps identical to ffmpeg's I still get no B-frames.

    At this point I don't know what else to try. When I run ffmpeg, I run it with the command line below. Note that aside from the "superfast" preset, I pretty much use the default values.

    ffmpeg -v debug -i ~/annexb.264 -codec:v libx264 -preset superfast -g 30 -f h264 ./out.264

    The code that configures the encoder is listed below. It specifies the "superfast" preset too.

    static AVStream *add_video_stream(AVFormatContext *output_ctx, AVCodec **output_codec, enum AVCodecID codec_id)
    {
       *output_codec = avcodec_find_encoder(codec_id);
       if (*output_codec == NULL) {
           printf("Could not find encoder for '%s' (%d)\n", avcodec_get_name(codec_id), codec_id);
           return NULL;
       }

       AVStream *output_stream = avformat_new_stream(output_ctx, *output_codec);
       if (output_stream == NULL) {
           printf("Could not create video stream.\n");
           return NULL;
       }
       output_stream->id = output_ctx->nb_streams - 1;
       AVCodecContext *codec_ctx = output_stream->codec;

       avcodec_get_context_defaults3(codec_ctx, *output_codec);

       codec_ctx->width = 1280;
       codec_ctx->height = 720;

       codec_ctx->time_base.den = 15000;
       codec_ctx->time_base.num = 1001;

    /*    codec_ctx->gop_size = 30;*/
       codec_ctx->pix_fmt = AV_PIX_FMT_YUVJ420P;

       // try to force B-frame output
    /*    codec_ctx->max_b_frames = 3;*/
    /*    codec_ctx->b_frame_strategy = 2;*/

       output_stream->sample_aspect_ratio.num = 1;
       output_stream->sample_aspect_ratio.den = 1;

       codec_ctx->sample_aspect_ratio.num = 1;
       codec_ctx->sample_aspect_ratio.den = 1;

       codec_ctx->chroma_sample_location = AVCHROMA_LOC_LEFT;

       codec_ctx->bits_per_raw_sample = 8;

       if ((output_ctx->oformat->flags & AVFMT_GLOBALHEADER) != 0) {
           codec_ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       return output_stream;
    }


    int main(int argc, char **argv)
    {
       // ... open input file

       avformat_alloc_output_context2(&output_ctx, NULL, "h264", output_path);
       if (output_ctx == NULL) {
           fprintf(stderr, "Unable to allocate output context.\n");
           return 1;
       }

       AVCodec *output_codec = NULL;
       output_stream = add_video_stream(output_ctx, &output_codec, output_ctx->oformat->video_codec);
       if (output_stream == NULL) {
           fprintf(stderr, "Error adding video stream to output context.\n");
           return 1;
       }
       encode_ctx = output_stream->codec;

       // seems to have no effect
    #if 0
       if (decode_ctx->extradata_size != 0) {
           size_t extradata_size = decode_ctx->extradata_size;
           printf("extradata_size: %zu\n", extradata_size);
           encode_ctx->extradata = av_mallocz(extradata_size + FF_INPUT_BUFFER_PADDING_SIZE);
           memcpy(encode_ctx->extradata, decode_ctx->extradata, extradata_size);
           encode_ctx->extradata_size = extradata_size;
       }
    #endif // 0

       AVDictionary *opts = NULL;
       av_dict_set(&opts, "preset", "superfast", 0);
       // av_dict_set(&opts, "threads", "auto", 0); // seems to have no effect

       ret = avcodec_open2(encode_ctx, output_codec, &opts);
       if (ret < 0) {
           fprintf(stderr, "Unable to open output video codec: %s\n", av_err2str(ret));
           return 1;
       }

       // ... decoding/encoding loop, clean up, etc.

       return 0;
    }

    My test utility produces the following debug output, in which you can see that no B-frames are produced:

    [libx264 @ 0x1b8c9c0] using mv_range_thread = 56
    [libx264 @ 0x1b8c9c0] using SAR=1/1
    [libx264 @ 0x1b8c9c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 0x1b8c9c0] profile High, level 3.1
    Output #0, h264, to './out.264':
       Stream #0:0, 0, 1/90000: Video: h264, yuvj420p, 1280x720 [SAR 1:1 DAR 16:9], 1001/15000, q=-1--1, 90k tbn, 14.99 tbc

    <snip>

    [libx264 @ 0x1b8c9c0] frame=   0 QP=17.22 NAL=3 Slice:I Poc:0   I:3600 P:0    SKIP:0    size=122837 bytes
    [libx264 @ 0x1b8c9c0] frame=   1 QP=18.03 NAL=2 Slice:P Poc:2   I:411  P:1825 SKIP:1364 size=25863 bytes
    [libx264 @ 0x1b8c9c0] frame=   2 QP=17.03 NAL=2 Slice:P Poc:4   I:369  P:2159 SKIP:1072 size=37880 bytes
    [libx264 @ 0x1b8c9c0] frame=   3 QP=16.90 NAL=2 Slice:P Poc:6   I:498  P:2330 SKIP:772  size=50509 bytes
    [libx264 @ 0x1b8c9c0] frame=   4 QP=16.68 NAL=2 Slice:P Poc:8   I:504  P:2233 SKIP:863  size=50791 bytes
    [libx264 @ 0x1b8c9c0] frame=   5 QP=16.52 NAL=2 Slice:P Poc:10  I:513  P:2286 SKIP:801  size=51820 bytes
    [libx264 @ 0x1b8c9c0] frame=   6 QP=16.49 NAL=2 Slice:P Poc:12  I:461  P:2293 SKIP:846  size=51311 bytes
    [libx264 @ 0x1b8c9c0] frame=   7 QP=16.65 NAL=2 Slice:P Poc:14  I:476  P:2287 SKIP:837  size=51196 bytes
    [libx264 @ 0x1b8c9c0] frame=   8 QP=16.66 NAL=2 Slice:P Poc:16  I:508  P:2240 SKIP:852  size=51577 bytes
    [libx264 @ 0x1b8c9c0] frame=   9 QP=16.55 NAL=2 Slice:P Poc:18  I:477  P:2278 SKIP:845  size=51531 bytes
    [libx264 @ 0x1b8c9c0] frame=  10 QP=16.67 NAL=2 Slice:P Poc:20  I:517  P:2233 SKIP:850  size=51946 bytes

    <snip>

    [libx264 @ 0x1b8c9c0] frame I:7     Avg QP:13.71  size:152207
    [libx264 @ 0x1b8c9c0] frame P:190   Avg QP:16.66  size: 50949
    [libx264 @ 0x1b8c9c0] mb I  I16..4: 27.1% 30.8% 42.1%
    [libx264 @ 0x1b8c9c0] mb P  I16..4:  6.8%  6.0%  0.8%  P16..4: 61.8%  0.0%  0.0%  0.0%  0.0%    skip:24.7%
    [libx264 @ 0x1b8c9c0] 8x8 transform intra:41.2% inter:86.9%
    [libx264 @ 0x1b8c9c0] coded y,uvDC,uvAC intra: 92.2% 28.3% 5.4% inter: 50.3% 1.9% 0.0%
    [libx264 @ 0x1b8c9c0] i16 v,h,dc,p:  7%  7% 77%  8%
    [libx264 @ 0x1b8c9c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  7% 15% 49%  6%  4%  3%  5%  3%  8%
    [libx264 @ 0x1b8c9c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 25% 24%  6%  7%  4%  6%  3%  6%
    [libx264 @ 0x1b8c9c0] i8c dc,h,v,p: 72% 14% 10%  4%
    [libx264 @ 0x1b8c9c0] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x1b8c9c0] kb/s:6539.11

    ffmpeg, on the other hand, produces the following output, which is almost identical but includes B-frames:

    [libx264 @ 0x20b9c40] using mv_range_thread = 56
    [libx264 @ 0x20b9c40] using SAR=1/1
    [libx264 @ 0x20b9c40] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 0x20b9c40] profile High, level 3.1
    [h264 @ 0x20b8160] detected 4 logical cores
    Output #0, h264, to './out.264':
     Metadata:
       encoder         : Lavf54.63.104
       Stream #0:0, 0, 1/90000: Video: h264, yuvj420p, 1280x720 [SAR 1:1 DAR 16:9], 1001/15000, q=-1--1, 90k tbn, 14.99 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 -> libx264)

    <snip>

    [libx264 @ 0x20b9c40] frame=   0 QP=17.22 NAL=3 Slice:I Poc:0   I:3600 P:0    SKIP:0    size=122835 bytes
    [libx264 @ 0x20b9c40] frame=   1 QP=18.75 NAL=2 Slice:P Poc:8   I:984  P:2045 SKIP:571  size=54208 bytes
    [libx264 @ 0x20b9c40] frame=   2 QP=19.40 NAL=2 Slice:B Poc:4   I:447  P:1581 SKIP:1572 size=24930 bytes
    [libx264 @ 0x20b9c40] frame=   3 QP=19.78 NAL=0 Slice:B Poc:2   I:199  P:1002 SKIP:2399 size=10717 bytes
    [libx264 @ 0x20b9c40] frame=   4 QP=20.19 NAL=0 Slice:B Poc:6   I:204  P:1155 SKIP:2241 size=15937 bytes
    [libx264 @ 0x20b9c40] frame=   5 QP=18.11 NAL=2 Slice:P Poc:16  I:990  P:2221 SKIP:389  size=64240 bytes
    [libx264 @ 0x20b9c40] frame=   6 QP=19.35 NAL=2 Slice:B Poc:12  I:439  P:1784 SKIP:1377 size=34048 bytes
    [libx264 @ 0x20b9c40] frame=   7 QP=19.88 NAL=0 Slice:B Poc:10  I:275  P:1035 SKIP:2290 size=16911 bytes
    [libx264 @ 0x20b9c40] frame=   8 QP=19.91 NAL=0 Slice:B Poc:14  I:257  P:1270 SKIP:2073 size=19172 bytes
    [libx264 @ 0x20b9c40] frame=   9 QP=17.90 NAL=2 Slice:P Poc:24  I:962  P:2204 SKIP:434  size=67439 bytes
    [libx264 @ 0x20b9c40] frame=  10 QP=18.84 NAL=2 Slice:B Poc:20  I:474  P:1911 SKIP:1215 size=37742 bytes

    <snip>

    [libx264 @ 0x20b9c40] frame I:7     Avg QP:15.95  size:130124
    [libx264 @ 0x20b9c40] frame P:52    Avg QP:17.78  size: 64787
    [libx264 @ 0x20b9c40] frame B:138   Avg QP:19.32  size: 26231
    [libx264 @ 0x20b9c40] consecutive B-frames:  6.6%  0.0%  0.0% 93.4%
    [libx264 @ 0x20b9c40] mb I  I16..4: 30.2% 35.2% 34.6%
    [libx264 @ 0x20b9c40] mb P  I16..4: 13.9% 11.4%  0.3%  P16..4: 60.4%  0.0%  0.0%  0.0%  0.0%    skip:13.9%
    [libx264 @ 0x20b9c40] mb B  I16..4:  5.7%  3.3%  0.0%  B16..8: 15.8%  0.0%  0.0%  direct:25.7%  skip:49.5%  L0:43.2% L1:37.3% BI:19.5%
    [libx264 @ 0x20b9c40] 8x8 transform intra:39.4% inter:77.2%
    [libx264 @ 0x20b9c40] coded y,uvDC,uvAC intra: 90.7% 26.6% 3.0% inter: 34.0% 4.1% 0.0%
    [libx264 @ 0x20b9c40] i16 v,h,dc,p:  7%  7% 77%  9%
    [libx264 @ 0x20b9c40] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  7% 16% 51%  5%  4%  3%  5%  3%  7%
    [libx264 @ 0x20b9c40] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 27% 20%  6%  6%  3%  6%  3%  6%
    [libx264 @ 0x20b9c40] i8c dc,h,v,p: 71% 15% 11%  3%
    [libx264 @ 0x20b9c40] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x20b9c40] kb/s:4807.16

    I'm sure I'm missing something simple, but I can't for the life of me see what it is. Any assistance would be greatly appreciated.
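
    As an aside, when comparing two x264 debug logs like the ones above, the slice-type counts can be tallied mechanically rather than by eye. A minimal standalone sketch; the `count_slices` helper is hypothetical, not part of ffmpeg or x264:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Count occurrences of a slice marker such as "Slice:B" in an
     * x264 debug log held in memory. */
    static int count_slices(const char *log, const char *marker)
    {
        int count = 0;
        const char *p = log;
        while ((p = strstr(p, marker)) != NULL) {
            count++;
            p += strlen(marker);
        }
        return count;
    }

    int main(void)
    {
        /* Three lines excerpted from the per-frame logs above. */
        const char *log =
            "[libx264] frame=   1 QP=18.75 NAL=2 Slice:P Poc:8\n"
            "[libx264] frame=   2 QP=19.40 NAL=2 Slice:B Poc:4\n"
            "[libx264] frame=   3 QP=19.78 NAL=0 Slice:B Poc:2\n";
        printf("P=%d B=%d\n", count_slices(log, "Slice:P"),
               count_slices(log, "Slice:B"));   /* prints P=1 B=2 */
        return 0;
    }
    ```

    Run over the full logs, this would show 0 B-slices for the test utility and 138 for ffmpeg, matching the summary lines.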


  • openGL ES 2.0 on android , YUV to RGB and Rendering with ffMpeg

    14 October 2013, by 101110101100111111101101

    My renderer dies one or two frames after the video starts to show.

    FATAL ERROR 11: blabla... (it occurs exactly in glDrawElements, in the Y part)

    I think the problem is 'glPixelStorei', or 'GL_RGB' vs 'GL_LUMINANCE', but I don't get it.

    My rendering approach:

    1. Decode the data received from the network (fetched in the SDK, decoded in the NDK) and enqueue it.

    2. Another thread (synchronized, of course) dequeues it and gets ready to set up OpenGL ES 2.0 (SDK).

    3. When the onDrawFrame, onSurfaceCreated and onSurfaceChanged methods are called, they drop down into the NDK. (My renderer source from the NDK is attached below.)

    4. Rendering.

    As you know, the fragment shader is used for the conversion.
    My data is YUV 420p (pix_fmt_YUV420p), 12 bits per pixel.

    Here is my entire source.

    I don't have any prior knowledge of OpenGL ES; this is my first time.

    Please let me know how I can improve performance.

    Also, which parameters should I use in 'glTexImage2D', 'glTexSubImage2D' and 'glRenderbufferStorage'?
    GL_LUMINANCE? GL_RGBA? GL_RGB? (GL_LUMINANCE is what I use now.)
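
    Incidentally, the "12 bits per pixel" figure, and the `USIZE = YSIZE/4` fallback in the code below, follow directly from the 4:2:0 layout: the Y plane is full resolution, while the U and V planes are quarter resolution (half in each dimension). A small standalone sketch, with helper names of my own choosing:

    ```c
    #include <stdio.h>

    /* Plane sizes for a YUV420p frame: Y is w*h bytes, U and V are
     * (w/2)*(h/2) bytes each, i.e. 12 bits per pixel in total. */
    static int y_plane_size(int w, int h)      { return w * h; }
    static int chroma_plane_size(int w, int h) { return (w / 2) * (h / 2); }

    int main(void)
    {
        int w = 1280, h = 720;
        int y = y_plane_size(w, h);
        int c = chroma_plane_size(w, h);
        /* Bits per pixel: (y + 2*c) * 8 / (w*h) == 12 for 4:2:0. */
        printf("Y=%d U=V=%d bpp=%d\n", y, c, (y + 2 * c) * 8 / (w * h));
        /* prints Y=921600 U=V=230400 bpp=12 */
        return 0;
    }
    ```

    These per-plane widths and heights are also what the glTexImage2D/glTexSubImage2D calls would need when uploading each plane as a single-channel (GL_LUMINANCE) texture.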

    void Renderer::set_draw_frame(JNIEnv* jenv, jbyteArray yData, jbyteArray uData, jbyteArray vData)
    {
       for (int i = 0; i < 3; i++) {
           if (yuv_data_[i] != NULL) {
               free(yuv_data_[i]);
           }
       }

     int YSIZE = -1;
     int USIZE = -1;
     int VSIZE = -1;

     if (yData != NULL) {
           YSIZE = (int)jenv->GetArrayLength(yData);
       LOG_DEBUG("YSIZE : %d", YSIZE);
           yuv_data_[0] = (unsigned char*)malloc(sizeof(unsigned char) * YSIZE);
       memset(yuv_data_[0], 0, YSIZE);
           jenv->GetByteArrayRegion(yData, 0, YSIZE, (jbyte*)yuv_data_[0]);
       yuv_data_[0] = reinterpret_cast<unsigned char*>(yuv_data_[0]);
       } else {
           YSIZE = (int)jenv->GetArrayLength(yData);
           yuv_data_[0] = (unsigned char*)malloc(sizeof(unsigned char) * YSIZE);
       memset(yuv_data_[0], 1, YSIZE);
     }

       if (uData != NULL) {
           USIZE = (int)jenv->GetArrayLength(uData);
       LOG_DEBUG("USIZE : %d", USIZE);
           yuv_data_[1] = (unsigned char*)malloc(sizeof(unsigned char) * USIZE);
       memset(yuv_data_[1], 0, USIZE);
           jenv->GetByteArrayRegion(uData, 0, USIZE, (jbyte*)yuv_data_[1]);
       yuv_data_[1] = reinterpret_cast<unsigned char*>(yuv_data_[1]);
       } else {
           USIZE = YSIZE/4;
           yuv_data_[1] = (unsigned char*)malloc(sizeof(unsigned char) * USIZE);
       memset(yuv_data_[1], 1, USIZE);
     }

       if (vData != NULL) {
           VSIZE = (int)jenv->GetArrayLength(vData);
       LOG_DEBUG("VSIZE : %d", VSIZE);
           yuv_data_[2] = (unsigned char*)malloc(sizeof(unsigned char) * VSIZE);
       memset(yuv_data_[2], 0, VSIZE);
           jenv->GetByteArrayRegion(vData, 0, VSIZE, (jbyte*)yuv_data_[2]);
       yuv_data_[2] = reinterpret_cast<unsigned char*>(yuv_data_[2]);
       } else {
           VSIZE = YSIZE/4;
           yuv_data_[2] = (unsigned char*)malloc(sizeof(unsigned char) * VSIZE);
       memset(yuv_data_[2], 1, VSIZE);
     }

       glClearColor(1.0F, 1.0F, 1.0F, 1.0F);
       check_gl_error("glClearColor");
       glClear(GL_COLOR_BUFFER_BIT);
       check_gl_error("glClear");
    }

    void Renderer::draw_frame()
    {
     // Binding created FBO
     glBindFramebuffer(GL_FRAMEBUFFER, frame_buffer_object_);
     check_gl_error("glBindFramebuffer");
       // Add program to OpenGL environment
       glUseProgram(program_object_);
       check_gl_error("glUseProgram");

     for (int i = 0; i < 3; i++) {
       LOG_DEBUG("Success");
         //Bind texture
         glActiveTexture(GL_TEXTURE0 + i);
         check_gl_error("glActiveTexture");
         glBindTexture(GL_TEXTURE_2D, yuv_texture_id_[i]);
         check_gl_error("glBindTexture");
         glUniform1i(yuv_texture_object_[i], i);
         check_gl_error("glBindTexture");
       glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, stream_yuv_width_[i], stream_yuv_height_[i], GL_RGBA, GL_UNSIGNED_BYTE, yuv_data_[i]);
         check_gl_error("glTexSubImage2D");
     }

     LOG_DEBUG("Success");
       // Load vertex information
       glVertexAttribPointer(position_object_, 2, GL_FLOAT, GL_FALSE, kStride, kVertexInformation);
       check_gl_error("glVertexAttribPointer");
       // Load texture information
       glVertexAttribPointer(texture_position_object_, 2, GL_SHORT, GL_FALSE, kStride, kTextureCoordinateInformation);
       check_gl_error("glVertexAttribPointer");

    LOG_DEBUG("9");
       glEnableVertexAttribArray(position_object_);
       check_gl_error("glEnableVertexAttribArray");
       glEnableVertexAttribArray(texture_position_object_);
       check_gl_error("glEnableVertexAttribArray");

     // Back to window buffer
     glBindFramebuffer(GL_FRAMEBUFFER, 0);
     check_gl_error("glBindFramebuffer");
     LOG_DEBUG("Success");
       // Draw the Square
       glDrawElements(GL_TRIANGLE_STRIP, 6, GL_UNSIGNED_SHORT, kIndicesInformation);
       check_gl_error("glDrawElements");
    }

    void Renderer::setup_render_to_texture()
    {
       glGenFramebuffers(1, &frame_buffer_object_);
       check_gl_error("glGenFramebuffers");
       glBindFramebuffer(GL_FRAMEBUFFER, frame_buffer_object_);
       check_gl_error("glBindFramebuffer");
       glGenRenderbuffers(1, &render_buffer_object_);
       check_gl_error("glGenRenderbuffers");
       glBindRenderbuffer(GL_RENDERBUFFER, render_buffer_object_);
       check_gl_error("glBindRenderbuffer");
       glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, stream_yuv_width_[0], stream_yuv_height_[0]);
       check_gl_error("glRenderbufferStorage");
       glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buffer_object_);
       check_gl_error("glFramebufferRenderbuffer");
     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, yuv_texture_id_[0], 0);
       check_gl_error("glFramebufferTexture2D");  
     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, yuv_texture_id_[1], 0);
       check_gl_error("glFramebufferTexture2D");  
     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, yuv_texture_id_[2], 0);
       check_gl_error("glFramebufferTexture2D");  

     glBindFramebuffer(GL_FRAMEBUFFER, 0);
       check_gl_error("glBindFramebuffer");

       GLint status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
       if (status != GL_FRAMEBUFFER_COMPLETE) {
           print_log("renderer.cpp", "setup_graphics", "FBO setting fault.", LOGERROR);
           LOG_ERROR("%d\n", status);
           return;
       }
    }

    void Renderer::setup_yuv_texture()
    {
       // Use tightly packed data
       glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
       check_gl_error("glPixelStorei");

     for (int i = 0; i < 3; i++) {
       if (yuv_texture_id_[i]) {
         glDeleteTextures(1, &yuv_texture_id_[i]);
         check_gl_error("glDeleteTextures");
       }
         glActiveTexture(GL_TEXTURE0+i);
         check_gl_error("glActiveTexture");
         // Generate texture object
         glGenTextures(1, &yuv_texture_id_[i]);
         check_gl_error("glGenTextures");
         glBindTexture(GL_TEXTURE_2D, yuv_texture_id_[i]);
         check_gl_error("glBindTexture");
       glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
         check_gl_error("glTexParameteri");
         glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
         check_gl_error("glTexParameteri");
         glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
         check_gl_error("glTexParameterf");
         glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
         check_gl_error("glTexParameterf");
       glEnable(GL_TEXTURE_2D);
       check_gl_error("glEnable");
       glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, maximum_yuv_width_[i], maximum_yuv_height_[i], 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
         check_gl_error("glTexImage2D");
     }
    }

    void Renderer::setup_graphics()
    {
       print_gl_string("Version", GL_VERSION);
       print_gl_string("Vendor", GL_VENDOR);
       print_gl_string("Renderer", GL_RENDERER);
       print_gl_string("Extensions", GL_EXTENSIONS);

       program_object_ = create_program(kVertexShader, kFragmentShader);
       if (!program_object_) {
           print_log("renderer.cpp", "setup_graphics", "Could not create program.", LOGERROR);
           return;
       }

       position_object_ = glGetAttribLocation(program_object_, "vPosition");
       check_gl_error("glGetAttribLocation");
       texture_position_object_ = glGetAttribLocation(program_object_, "vTexCoord");
       check_gl_error("glGetAttribLocation");

    yuv_texture_object_[0] = glGetUniformLocation(program_object_, "yTexture");
    check_gl_error("glGetUniformLocation");
    yuv_texture_object_[1] = glGetUniformLocation(program_object_, "uTexture");
    check_gl_error("glGetUniformLocation");
    yuv_texture_object_[2] = glGetUniformLocation(program_object_, "vTexture");
    check_gl_error("glGetUniformLocation");

    setup_yuv_texture();
    setup_render_to_texture();

    glViewport(0, 0, stream_yuv_width_[0], stream_yuv_height_[0]);  // sizes tried: 736x480, 1920x1080, maximum_yuv_*
    check_gl_error("glViewport");
}

    GLuint Renderer::create_program(const char* vertex_source, const char* fragment_source)
    {
       GLuint vertexShader = load_shader(GL_VERTEX_SHADER, vertex_source);
       if (!vertexShader) {
           return 0;
       }

       GLuint pixelShader = load_shader(GL_FRAGMENT_SHADER, fragment_source);
       if (!pixelShader) {
           return 0;
       }

       GLuint program = glCreateProgram();
       if (program) {
           glAttachShader(program, vertexShader);
           check_gl_error("glAttachShader");
           glAttachShader(program, pixelShader);
           check_gl_error("glAttachShader");
           glLinkProgram(program);
            /* Check link status */
           GLint linkStatus = GL_FALSE;
           glGetProgramiv(program, GL_LINK_STATUS, &amp;linkStatus);
           if (linkStatus != GL_TRUE) {
               GLint bufLength = 0;
               glGetProgramiv(program, GL_INFO_LOG_LENGTH, &amp;bufLength);
               if (bufLength) {
                   char* buf = (char*) malloc(bufLength);
                   if (buf) {
                       glGetProgramInfoLog(program, bufLength, NULL, buf);
                       print_log("renderer.cpp", "create_program", "Could not link program.", LOGERROR);
                       LOG_ERROR("%s\n", buf);
                       free(buf);
                   }
               }
               glDeleteProgram(program);
               program = 0;
           }
       }
       return program;
    }

    GLuint Renderer::load_shader(GLenum shaderType, const char* pSource)
    {
        GLuint shader = glCreateShader(shaderType);
        if (shader) {
            glShaderSource(shader, 1, &pSource, NULL);
            glCompileShader(shader);
            /* Check compile status */
            GLint compiled = 0;
            glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
            if (!compiled) {
                GLint infoLen = 0;
                glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
                if (infoLen) {
                    char* buf = (char*) malloc(infoLen);
                    if (buf) {
                        glGetShaderInfoLog(shader, infoLen, NULL, buf);
                        print_log("renderer.cpp", "load_shader", "Could not compile shader.", LOGERROR);
                        LOG_ERROR("%d :: %s\n", shaderType, buf);
                        free(buf);
                    }
                }
                // Delete the shader even when no info log is available,
                // so a failed compile never returns a live handle.
                glDeleteShader(shader);
                shader = 0;
            }
        }
        return shader;
    }


    void Renderer::onDrawFrame(JNIEnv* jenv, jbyteArray yData, jbyteArray uData, jbyteArray vData)
    {
       set_draw_frame(jenv, yData, uData, vData);
       draw_frame();
       return;
    }

    void Renderer::setSize(int stream_width, int stream_height) {
     stream_yuv_width_[0] = stream_width;
     stream_yuv_width_[1] = stream_width/2;
     stream_yuv_width_[2] = stream_width/2;
     stream_yuv_height_[0] = stream_height;
     stream_yuv_height_[1] = stream_height/2;
     stream_yuv_height_[2] = stream_height/2;
    }
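
    setSize() above halves the chroma dimensions because the stream is 4:2:0 (YUV420P): each U and V plane is a quarter of the luma plane. A small standalone sketch of the resulting buffer sizes (the helper name is mine, just for illustration):

    ```cpp
    #include <cassert>
    #include <cstddef>

    // For 4:2:0 data (as set up in setSize), each chroma plane has half the
    // luma width and height, so a full frame occupies w * h * 3 / 2 bytes.
    static std::size_t yuv420_frame_bytes(std::size_t w, std::size_t h)
    {
        std::size_t luma   = w * h;
        std::size_t chroma = (w / 2) * (h / 2);
        return luma + 2 * chroma;
    }

    int main()
    {
        assert(yuv420_frame_bytes(736, 480) == 736 * 480 * 3 / 2);
        assert(yuv420_frame_bytes(1920, 1080) == 1920 * 1080 * 3 / 2);
        return 0;
    }
    ```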

    void Renderer::onSurfaceChanged(int width, int height)
    {
     mobile_yuv_width_[0] = width;
     mobile_yuv_width_[1] = width/2;
     mobile_yuv_width_[2] = width/2;
     mobile_yuv_height_[0] = height;
     mobile_yuv_height_[1] = height/2;
     mobile_yuv_height_[2] = height/2;

     maximum_yuv_width_[0] = 1920;
     maximum_yuv_width_[1] = 1920/2;
     maximum_yuv_width_[2] = 1920/2;
     maximum_yuv_height_[0] = 1080;
     maximum_yuv_height_[1] = 1080/2;
     maximum_yuv_height_[2] = 1080/2;

     // If the stream size has not been set, fall back to D1
     //if (stream_yuv_width_[0] == 0) {
       stream_yuv_width_[0] = 736;
       stream_yuv_width_[1] = 736/2;
       stream_yuv_width_[2] = 736/2;
       stream_yuv_height_[0] = 480;
       stream_yuv_height_[1] = 480/2;
       stream_yuv_height_[2] = 480/2;
     //}

       setup_graphics();
       return;
    }
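    As for the B-frame question itself: the renderer above only displays decoded frames; whether B-frames are produced is decided by the encoder settings. A hedged sketch of AVCodecContext settings that let libx264 emit B-frames (the helper name is mine; it requires the FFmpeg development headers, so it is not a complete program). Note that `tune=zerolatency` and the baseline profile both disable B-frames:

    ```cpp
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>
    }

    // Sketch: encoder settings that permit B-frames with libx264.
    // Do NOT set "tune" to "zerolatency" or "profile" to "baseline"
    // if B-frames are wanted -- both force bframes=0 in x264.
    AVCodecContext* make_x264_context(int width, int height)
    {
        const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        AVCodecContext* ctx  = avcodec_alloc_context3(codec);
        ctx->width        = width;
        ctx->height       = height;
        ctx->pix_fmt      = AV_PIX_FMT_YUV420P;
        ctx->time_base    = AVRational{1, 30};
        ctx->gop_size     = 30;
        ctx->max_b_frames = 2;  // allow up to 2 consecutive B-frames
        av_opt_set(ctx->priv_data, "preset",  "medium", 0);
        av_opt_set(ctx->priv_data, "profile", "main",   0);  // main/high support B-frames
        if (avcodec_open2(ctx, codec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return NULL;
        }
        return ctx;
    }
    ```

    With `max_b_frames` left at 0 (its default after avcodec_alloc_context3 in some setups) or with zerolatency tuning, the stream will contain only I- and P-frames, which matches the symptom described.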

    Here are my vertex and fragment shader sources, along with the coordinate data:

    static const char kVertexShader[] =
       "attribute vec4 vPosition;      \n"
         "attribute vec2 vTexCoord;        \n"
         "varying vec2 v_vTexCoord;        \n"
       "void main() {                        \n"
           "gl_Position = vPosition;       \n"
           "v_vTexCoord = vTexCoord;       \n"
       "}                                          \n";

    static const char kFragmentShader[] =
           "precision mediump float;               \n"
           "varying vec2 v_vTexCoord;          \n"
           "uniform sampler2D yTexture;        \n"
           "uniform sampler2D uTexture;        \n"
           "uniform sampler2D vTexture;        \n"
           "void main() {                      \n"
               "float y=texture2D(yTexture, v_vTexCoord).r;\n"
               "float u=texture2D(uTexture, v_vTexCoord).r - 0.5;\n"
               "float v=texture2D(vTexture, v_vTexCoord).r - 0.5;\n"
               "float r=y + 1.13983 * v;\n"
               "float g=y - 0.39465 * u - 0.58060 * v;\n"
               "float b=y + 2.03211 * u;\n"
               "gl_FragColor = vec4(r, g, b, 1.0);\n"
           "}\n";

    static const GLfloat kVertexInformation[] =
    {
            -1.0f, 1.0f,           // Vertex 0: top left
            -1.0f,-1.0f,           // Vertex 1: bottom left
             1.0f,-1.0f,           // Vertex 2: bottom right
             1.0f, 1.0f            // Vertex 3: top right
    };
    static const GLshort kTextureCoordinateInformation[] =
    {
             0, 0,         // TexCoord 0 top left
             0, 1,         // TexCoord 1 bottom left
             1, 1,         // TexCoord 2 bottom right
             1, 0          // TexCoord 3 top right
    };
    static const GLuint kStride = 0;  // COORDS_PER_VERTEX * 4;
    static const GLshort kIndicesInformation[] =
    {
       0, 1, 2,
       0, 2, 3
    };