
Other articles (89)

  • Changing the publication date

    21 June 2013, by

    How do you change the publication date of a media item?
    You first need to add a "Publication date" field in the appropriate form mask:
    Administer > Form mask configuration > Select "A media item"
    Under "Fields to add", check "Publication date"
    Click Save at the bottom of the page

  • Specific configuration for PHP5

    4 February 2011, by

    PHP5 is required; you can install it by following this dedicated tutorial.
    It is recommended to disable safe_mode at first; however, if it is correctly configured and the necessary binaries are accessible, MediaSPIP should work correctly with safe_mode enabled.
    Specific modules
    Certain specific PHP modules must be installed, via your distribution's package manager or manually: php5-mysql for connectivity with the (...)
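    As a sketch of that step, on a Debian-style system such a module could be installed and checked like so (the package name is the one quoted in the excerpt; the exact list and naming depend on your distribution):

```
# Example only: install the MySQL module for PHP5 (Debian/Ubuntu naming)
apt-get install php5-mysql
# Check that the module is now visible to PHP
php -m | grep -i mysql
```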

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010, by

    The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (7433)

  • ffmpeg C api cutting a video when packet dts is greater than pts

    10 March 2017, by TastyCatFood

    Corrupted videos

    While trying to cut a section out of one of my videos with the ffmpeg C API, using the code posted here: How to cut video with FFmpeg C API, ffmpeg spat out the log below:

    D/logger: Loop count:9   out: pts:0 pts_time:0 dts:2002 dts_time:0.0333667 duration:2002 duration_time:0.0333667 stream_index:1
    D/trim_video: Error muxing packet Invalid argument

    ffmpeg considers an instruction to decompress a frame after presenting it (dts greater than pts) to be nonsense, which is, well... reasonable, but strict.

    My VLC player opens the video fine and plays it, of course.

    Note:

    The code immediately below is C++, written to be compiled with g++, as I'm developing for Android. For C code, scroll down further.

    My solution (g++):

    extern "C" {
    #include "libavformat/avformat.h"
    #include "libavutil/mathematics.h"
    #include "libavutil/timestamp.h"



    static void log_packet(
           const AVFormatContext *fmt_ctx,
           const AVPacket *pkt, const char *tag,
           long count=0)
    {

       printf("loop count %ld pts:%f dts:%f duration:%f stream_index:%d\n",
              count,
              static_cast<double>(pkt->pts),
              static_cast<double>(pkt->dts),
              static_cast<double>(pkt->duration),
              pkt->stream_index);
       return;
    }

    int trimVideo(
           const char* in_filename,
           const char* out_filename,
           double cutFrom,
           double cutUpTo)
    {
       AVOutputFormat *ofmt = NULL;
       AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
       AVPacket pkt;
       int ret, i;
       //jboolean  copy = true;
       //const char *in_filename = env->GetStringUTFChars(jstring_in_filename,&copy);
       //const char *out_filename = env->GetStringUTFChars(jstring_out_filename,&copy);
       long loopCount = 0;

       av_register_all();

       // Cutting may change the pts and dts of the resulting video;
       // if frames in head position are removed.
       // In the case like that, src stream's copy start pts
       // need to be recorded and is used to compute the new pts value.
       // e.g.
       //    new_pts = current_pts - trim_start_position_pts;

       // nb-streams is the number of elements in AVFormatContext.streams.
       // Initial pts value must be recorded for each stream.

       //May be malloc and memset should be replaced with [].
       int64_t *dts_start_from = NULL;
       int64_t *pts_start_from = NULL;

       if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
           printf( "Could not open input file '%s'", in_filename);
           goto end;
       }

       if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
           printf("Failed to retrieve input stream information");
           goto end;
       }

       av_dump_format(ifmt_ctx, 0, in_filename, 0);

       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
       if (!ofmt_ctx) {
           printf( "Could not create output context\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       ofmt = ofmt_ctx->oformat;

       //preparing streams
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           AVStream *in_stream = ifmt_ctx->streams[i];
           AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
           if (!out_stream) {
               printf( "Failed allocating output stream\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }

           ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
           if (ret < 0) {
               printf( "Failed to copy context from input to output stream codec context\n");
               goto end;
           }
           out_stream->codec->codec_tag = 0;
           if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
               out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }
       av_dump_format(ofmt_ctx, 0, out_filename, 1);

       if (!(ofmt->flags & AVFMT_NOFILE)) {
           ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               printf( "Could not open output file '%s'", out_filename);
               goto end;
           }
       }
       //preparing the header
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0) {
           printf( "Error occurred when opening output file\n");
           goto end;
       }

       // av_seek_frame translates AV_TIME_BASE into an appropriate time base.
       ret = av_seek_frame(ifmt_ctx, -1, cutFrom*AV_TIME_BASE, AVSEEK_FLAG_ANY);
       if (ret < 0) {
           printf( "Error seek\n");
           goto end;
       }
       dts_start_from = static_cast<int64_t*>(
               malloc(sizeof(int64_t) * ifmt_ctx->nb_streams));
       memset(dts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
       pts_start_from = static_cast<int64_t*>(
               malloc(sizeof(int64_t) * ifmt_ctx->nb_streams));
       memset(pts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);

       //writing
       while (1) {
           AVStream *in_stream, *out_stream;
           //reading frame into pkt
           ret = av_read_frame(ifmt_ctx, &pkt);
           if (ret < 0)
               break;
           in_stream  = ifmt_ctx->streams[pkt.stream_index];
           out_stream = ofmt_ctx->streams[pkt.stream_index];

           //if end reached
           if (av_q2d(in_stream->time_base) * pkt.pts > cutUpTo) {
               av_packet_unref(&pkt);
               break;
           }


           // Recording the initial pts value for each stream
           // Recording dts does not do the trick because AVPacket.dts values
           // in some video files are larger than corresponding pts values
           // and ffmpeg does not like it.
           if (dts_start_from[pkt.stream_index] == 0) {
               dts_start_from[pkt.stream_index] = pkt.pts;
               printf("dts_initial_value: %f for stream index: %d \n",
                       static_cast<double>(dts_start_from[pkt.stream_index]),
                                   pkt.stream_index

               );
           }
           if (pts_start_from[pkt.stream_index] == 0) {
               pts_start_from[pkt.stream_index] = pkt.pts;
               printf( "pts_initial_value:  %f for stream index %d\n",
                       static_cast<double>(pts_start_from[pkt.stream_index]),
                                   pkt.stream_index);
           }

           log_packet(ifmt_ctx, &pkt, "in", loopCount);

           /* Computes pts etc
            *      av_rescale_q_rend etc are countering changes in time_base between
            *      out_stream and in_stream, so regardless of time_base values for
            *      in and out streams, the rate at which frames are refreshed remains
            *      the same.
            *
                   pkt.pts = pkt.pts * (in_stream->time_base/ out_stream->time_base)
                   As `time_base == 1/frame_rate`, the above is an equivalent of

                   (out_stream_frame_rate/in_stream_frame_rate)*pkt.pts where
                   frame_rate is the number of frames to be displayed per second.

                   AV_ROUND_PASS_MINMAX may set pts or dts to AV_NOPTS_VALUE
            * */


           pkt.pts =
                   av_rescale_q_rnd(
                   pkt.pts - pts_start_from[pkt.stream_index],
                   in_stream->time_base,
                   out_stream->time_base,
                   static_cast<AVRounding>(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));
           pkt.dts =
                   av_rescale_q_rnd(
                   pkt.dts - dts_start_from[pkt.stream_index],
                   in_stream->time_base,
                   out_stream->time_base,
                   static_cast<AVRounding>(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));

           if (pkt.dts > pkt.pts) pkt.dts = pkt.pts - 1;
           if (pkt.dts < 0) pkt.dts = 0;
           if (pkt.pts < 0) pkt.pts = 0;

           pkt.duration = av_rescale_q(
                   pkt.duration,
                   in_stream->time_base,
                   out_stream->time_base);
           pkt.pos = -1;
           log_packet(ofmt_ctx, &pkt, "out", loopCount);

           // Writes to the file after buffering packets enough to generate a frame
           // and probably sorting packets in dts order.
           ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
    //        ret = av_write_frame(ofmt_ctx, &pkt);
           if (ret < 0) {
               printf("Error muxing packet %d \n", ret);
               //continue;
               break;
           }
           av_packet_unref(&pkt);
           ++loopCount;
       }

       //Writing end code?
       av_write_trailer(ofmt_ctx);

       end:
       avformat_close_input(&ifmt_ctx);

       if(dts_start_from)free(dts_start_from);
       if(pts_start_from)free(pts_start_from);

       /* close output */
       if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
           avio_closep(&ofmt_ctx->pb);
       avformat_free_context(ofmt_ctx);

       if (ret < 0 && ret != AVERROR_EOF) {
           //printf( "Error occurred: %s\n", av_err2str(ret));
           return 1;
       }

       return 0;
    }

    }

    C-compatible version (the build console mentions g++, but I'm sure this is C code):

    #include "libavformat/avformat.h"
    #include "libavutil/mathematics.h"
    #include "libavutil/timestamp.h"



    static void log_packet(
           const AVFormatContext *fmt_ctx,
           const AVPacket *pkt, const char *tag,
           long count)
    {

       printf("loop count %d pts:%f dts:%f duration:%f stream_index:%d\n",
              count,
              (double)pkt->pts,
              (double)pkt->dts,
              (double)pkt->duration,
              pkt->stream_index);
       return;
    }

    int trimVideo(
           const char* in_filename,
           const char* out_filename,
           double cutFrom,
           double cutUpTo)
    {
       AVOutputFormat *ofmt = NULL;
       AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
       AVPacket pkt;
       int ret, i;
       //jboolean  copy = true;
       //const char *in_filename = env->GetStringUTFChars(jstring_in_filename,&copy);
       //const char *out_filename = env->GetStringUTFChars(jstring_out_filename,&copy);
       long loopCount = 0;

       av_register_all();

       // Cutting may change the pts and dts of the resulting video;
       // if frames in head position are removed.
       // In the case like that, src stream's copy start pts
       // need to be recorded and is used to compute the new pts value.
       // e.g.
       //    new_pts = current_pts - trim_start_position_pts;

       // nb-streams is the number of elements in AVFormatContext.streams.
       // Initial pts value must be recorded for each stream.

       //May be malloc and memset should be replaced with [].
       int64_t *dts_start_from = NULL;
       int64_t *pts_start_from = NULL;

       if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0) {
           printf( "Could not open input file '%s'", in_filename);
           goto end;
       }

       if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
           printf("Failed to retrieve input stream information");
           goto end;
       }

       av_dump_format(ifmt_ctx, 0, in_filename, 0);

       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
       if (!ofmt_ctx) {
           printf( "Could not create output context\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       ofmt = ofmt_ctx->oformat;

       //preparing streams
       for (i = 0; i < ifmt_ctx->nb_streams; i++) {
           AVStream *in_stream = ifmt_ctx->streams[i];
           AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
           if (!out_stream) {
               printf( "Failed allocating output stream\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }

           ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
           if (ret < 0) {
               printf( "Failed to copy context from input to output stream codec context\n");
               goto end;
           }
           out_stream->codec->codec_tag = 0;
           if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
               out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
       }
       av_dump_format(ofmt_ctx, 0, out_filename, 1);

       if (!(ofmt->flags & AVFMT_NOFILE)) {
           ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
           if (ret < 0) {
               printf( "Could not open output file '%s'", out_filename);
               goto end;
           }
       }
       //preparing the header
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0) {
           printf( "Error occurred when opening output file\n");
           goto end;
       }

       // av_seek_frame translates AV_TIME_BASE into an appropriate time base.
       ret = av_seek_frame(ifmt_ctx, -1, cutFrom*AV_TIME_BASE, AVSEEK_FLAG_ANY);
       if (ret < 0) {
           printf( "Error seek\n");
           goto end;
       }
       dts_start_from = (int64_t*)
               malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
       memset(dts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);
       pts_start_from = (int64_t*)
               malloc(sizeof(int64_t) * ifmt_ctx->nb_streams);
       memset(pts_start_from, 0, sizeof(int64_t) * ifmt_ctx->nb_streams);

       //writing
       while (1) {
           AVStream *in_stream, *out_stream;
           //reading frame into pkt
           ret = av_read_frame(ifmt_ctx, &pkt);
           if (ret < 0)
               break;
           in_stream  = ifmt_ctx->streams[pkt.stream_index];
           out_stream = ofmt_ctx->streams[pkt.stream_index];

           //if end reached
           if (av_q2d(in_stream->time_base) * pkt.pts > cutUpTo) {
               av_packet_unref(&pkt);
               break;
           }


           // Recording the initial pts value for each stream
           // Recording dts does not do the trick because AVPacket.dts values
           // in some video files are larger than corresponding pts values
           // and ffmpeg does not like it.
           if (dts_start_from[pkt.stream_index] == 0) {
               dts_start_from[pkt.stream_index] = pkt.pts;
               printf("dts_initial_value: %f for stream index: %d \n",
                       (double)dts_start_from[pkt.stream_index],
                                   pkt.stream_index

               );
           }
           if (pts_start_from[pkt.stream_index] == 0) {
               pts_start_from[pkt.stream_index] = pkt.pts;
               printf( "pts_initial_value:  %f for stream index %d\n",
                       (double)pts_start_from[pkt.stream_index],
                                   pkt.stream_index);
           }

           log_packet(ifmt_ctx, &pkt, "in", loopCount);

           /* Computes pts etc
            *      av_rescale_q_rend etc are countering changes in time_base between
            *      out_stream and in_stream, so regardless of time_base values for
            *      in and out streams, the rate at which frames are refreshed remains
            *      the same.
            *
                   pkt.pts = pkt.pts * (in_stream->time_base/ out_stream->time_base)
                   As `time_base == 1/frame_rate`, the above is an equivalent of

                   (out_stream_frame_rate/in_stream_frame_rate)*pkt.pts where
                   frame_rate is the number of frames to be displayed per second.

                   AV_ROUND_PASS_MINMAX may set pts or dts to AV_NOPTS_VALUE
            * */


           pkt.pts =
                   av_rescale_q_rnd(
                   pkt.pts - pts_start_from[pkt.stream_index],
                   in_stream->time_base,
                   out_stream->time_base,
                   AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
           pkt.dts =
                   av_rescale_q_rnd(
                   pkt.dts - dts_start_from[pkt.stream_index],
                   in_stream->time_base,
                   out_stream->time_base,
                   AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

           if (pkt.dts > pkt.pts) pkt.dts = pkt.pts - 1;
           if (pkt.dts < 0) pkt.dts = 0;
           if (pkt.pts < 0) pkt.pts = 0;

           pkt.duration = av_rescale_q(
                   pkt.duration,
                   in_stream->time_base,
                   out_stream->time_base);
           pkt.pos = -1;
           log_packet(ofmt_ctx, &pkt, "out", loopCount);

           // Writes to the file after buffering packets enough to generate a frame
           // and probably sorting packets in dts order.
           ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
    //        ret = av_write_frame(ofmt_ctx, &pkt);
           if (ret < 0) {
               printf("Error muxing packet %d \n", ret);
               //continue;
               break;
           }
           av_packet_unref(&pkt);
           ++loopCount;
       }

       //Writing end code?
       av_write_trailer(ofmt_ctx);

       end:
       avformat_close_input(&ifmt_ctx);

       if(dts_start_from)free(dts_start_from);
       if(pts_start_from)free(pts_start_from);

       /* close output */
       if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
           avio_closep(&ofmt_ctx->pb);
       avformat_free_context(ofmt_ctx);

       if (ret < 0 && ret != AVERROR_EOF) {
           //printf( "Error occurred: %s\n", av_err2str(ret));
           return 1;
       }

       return 0;
    }

    What is the problem

    My code does not produce the error because I'm computing new_dts = current_dts - initial_pts_for_current_stream. It works, but the dts values are now not properly computed.

    How can dts be recalculated properly?

    P.S.

    Since Olaf seems to have a very strong opinion, I'm posting the build console output for my main.c.
    I don't really know C or C++, but the IDE appears to call gcc for compiling and g++ for linking.
    The extension of my main file is now .c and the compiler invoked is gcc, so that should at least mean I have code written in the C language...

    ------------- Build: Debug in videoTrimmer (compiler: GNU GCC Compiler)---------------

    gcc -Wall -fexceptions -std=c99 -g -I/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include -I/usr/include -I/usr/local/include -c /home/d/CodeBlockWorkplace/videoTrimmer/main.c -o obj/Debug/main.o
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c: In function ‘log_packet’:
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:15:12: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘long int’ [-Wformat=]
        printf("loop count %d pts:%f dts:%f duration:%f stream_index:%d\n",
               ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c: In function ‘trimVideo’:
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:79:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
            AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
            ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘avcodec_copy_context’ is deprecated [-Wdeprecated-declarations]
            ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
            ^
    In file included from /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:319:0,
                    from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavcodec/avcodec.h:4286:5: note: declared here
    int avcodec_copy_context(AVCodecContext *dest, const AVCodecContext *src);
        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
            ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
            ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:86:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
            ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
            ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:91:9: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
            out_stream->codec->codec_tag = 0;
            ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    /home/d/CodeBlockWorkplace/videoTrimmer/main.c:93:13: warning: ‘codec’ is deprecated [-Wdeprecated-declarations]
                out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
                ^
    In file included from /home/d/CodeBlockWorkplace/videoTrimmer/main.c:3:0:
    /home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/include/libavformat/avformat.h:893:21: note: declared here
        AVCodecContext *codec;
                        ^
    g++ -L/home/d/Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib -L/usr/lib -L/usr/local/lib -o bin/Debug/videoTrimmer obj/Debug/main.o   ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavformat.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavcodec.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavutil.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libswresample.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libswscale.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavfilter.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libpostproc.a ../../Android/Sdk/ndk-bundle/sources/FFmpeg/local/lib/libavdevice.a -lX11 -lvdpau -lva -lva-drm -lva-x11 -ldl -lpthread -lz -llzma -lx264
    Output file is bin/Debug/videoTrimmer with size 77.24 MB
    Process terminated with status 0 (0 minute(s), 16 second(s))
    0 error(s), 8 warning(s) (0 minute(s), 16 second(s))
  • ffmpeg : change alpha channel of a showwaves filter [on hold]

    15 February 2014, by Znuff

    I've been trying to figure this issue out for a few hours and I can't seem to find any solution.

    I'm creating a video from an .mp3 and some images with the following command

    ffmpeg.exe -y -i temp\audio.mp3 -loop 1 -i Bokeh\frame-%03d.png -r 25 -filter_complex "[0:a] showwaves=size=1280x100:mode=line:r=25[wave];[1:v][wave] overlay=y=H-h:eval=init[canvas];[canvas]drawtext=fontfile='./tools/impact.ttf':fontsize=42:text='ORGANIKISMNESS':x=20:y=(h-170-text_h*2.20):fontcolor=white:shadowy=2:shadowx=2:shadowcolor=black,drawtext=fontfile='./tools/impact.ttf':fontsize=42:text='RETURN TO THE SOURCE PT.2 (ORGANIKISMNESS REMIX)':x=20:y=(h-170-text_h):fontcolor=white:shadowy=2:shadowx=2:shadowcolor=black" -shortest -acodec copy -vcodec libx264 -pix_fmt yuv420p -preset ultrafast -tune stillimage -crf 19 -movflags faststart "videos\Organikismness-Return to the Source Pt.2 (Organikismness Remix).mp4"

    I'm trying to give the [wave] (showwaves) output some sort of alpha channel, so that it is slightly transparent when overlaid on the rest of the video later.

    So far I've tried the blend filter, but it complains that the sources are not the same size (one is 1280x720, the showwaves source is 1280x100).

    I tried the colorchannelmixer filter, but I couldn't figure out how it should work.

    Does anyone have an idea how to do this?
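    For what it's worth, one frequently suggested approach (untested here) is to give the showwaves output an alpha plane with the format filter and then scale that alpha with colorchannelmixer before overlaying; the 0.5 opacity below is an arbitrary example value:

```
[0:a]showwaves=size=1280x100:mode=line:r=25,format=rgba,colorchannelmixer=aa=0.5[wave];[1:v][wave]overlay=y=H-h:eval=init[canvas]
```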

  • How to playback RAW video and audio in VLC ?

    24 February 2014, by Lane

    I have 2 files...

    • RAW H264 video
    • RAW PCM audio (uncompressed from PCM Mu Law)

    ...and I am looking to be able to play them in a Java application (using VLCJ possibly). I am able to run the ffmpeg command...

    • ffmpeg -i video -i audio -preset ultrafast movie.mp4

    ...to generate an mp4, but it takes 1/8 of the source length (1 min to generate a movie from 8 min of RAW data). My problem is that this is not fast enough, so I am trying to play back the RAW sources directly. I can play the video with the VLC command...

    • vlc video --demux=h264 (if I don't specify this flag, it doesn't work)

    ...and it plays correctly, but gives me the error...

    [0x10028bbe0] main interface error : no suitable interface module
    [0x10021d4a0] main libvlc : Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
    [0x10aa14950] h264 demux error : this doesn't look like a H264 ES stream, continuing anyway
    [0x1003ccb50] main input error : Invalid PCR value in ES_OUT_SET_(GROUP_)PCR !
    shader program 1 : WARNING : Output of vertex shader 'TexCoord1' not read by fragment shader
    WARNING : Output of vertex shader 'TexCoord2' not read by fragment shader

    ...similarly, I can play the RAW audio with the VLC command...

    • vlc audio (note that I do not need to specify the --demux flag)

    ...so, what I am looking for is...

    1. How to playback the RAW audio and video together using the VLC CLI ?
    2. Recommendations for a Java Application solution ?

    ...thanks !
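    For point 1, one avenue worth noting (a sketch, not a verified solution): VLC's --input-slave option attaches a second MRL to the main input, so the two raw files could in principle be combined in one invocation. The raw PCM side may additionally need explicit demuxer parameters (channel count, sample rate), which depend on the actual stream:

```
vlc video --demux=h264 --input-slave=audio
```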