Media (0)

No media matching your criteria is available on the site.

Other articles (51)

  • General document management

    13 May 2011, by

    MediaSPIP never modifies the original document uploaded to the site.
    For each uploaded document it performs two successive operations: the creation of an additional version that can easily be viewed online, while keeping the original downloadable in case the original document cannot be read in a web browser; and the retrieval of the original document’s metadata to describe the file textually.
    The tables below explain what MediaSPIP can do (...)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but the minimum status required to use them can be configured independently, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;

On other sites (4460)

  • aacenc_pred: rework the way prediction is done

    29 August 2015, by Rostislav Pehlivanov
    aacenc_pred: rework the way prediction is done
    

    This commit completely alters the algorithm of prediction.
    The original commit which introduced prediction was simply incorrect:
    it did not even remotely care about what the actual coefficients
    contain or whether any options were enabled. Not my actual fault.

    This commit treats prediction the way the decoder does and expects
    it to be done: like lossy encryption. Everything related to prediction now
    happens at the very end but just before quantization and encoding
    of coefficients. On the decoder side, prediction happens before
    anything has had a chance to even access the coefficients.

    Also, the original implementation had problems because it actually
    touched the band_type of special bands which already had their
    scalefactor indices marked, and it’s a wonder the assertion wasn’t
    triggered when transmitting those.

    Overall, this now drastically increases audio quality, and you should
    think about enabling it if you don’t plan on playing anything encoded
    on really old, low-power, ultra-embedded devices, since they might not
    support decoding of prediction or AAC-Main. The specifications were
    written ages ago, though, and as times change so do the FLOPS.

    Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>

    • [DH] libavcodec/aac.h
    • [DH] libavcodec/aaccoder.c
    • [DH] libavcodec/aacenc.c
    • [DH] libavcodec/aacenc.h
    • [DH] libavcodec/aacenc_pred.c
    • [DH] libavcodec/aacenc_pred.h
  • Huge memory leak when filtering video with libavfilter

    29 May 2017, by Captain Jack

    I have a relatively simple FFmpeg C program: a video frame is fed in, processed through a filter graph, and sent to a frame renderer.

    Here are some code snippets:

    /* Filter graph here */
    char args[512];
    enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_RGB32, AV_PIX_FMT_NONE };
    AVFilterGraph   *filter_graph;
    avfilter_register_all();
    AVFilter *buffersrc  = avfilter_get_by_name("buffer");
    AVFilter *buffersink = avfilter_get_by_name("ffbuffersink");
    AVBufferSinkParams *buffersink_params;
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs  = avfilter_inout_alloc();
    filter_graph = avfilter_graph_alloc();

    snprintf(args, sizeof(args),
           "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
           av->codec_ctx->width, av->codec_ctx->height, av->codec_ctx->pix_fmt,
           av->codec_ctx->time_base.num, av->codec_ctx->time_base.den,
           av->codec_ctx->sample_aspect_ratio.num, av->codec_ctx->sample_aspect_ratio.den);

    if (avfilter_graph_create_filter(&av->buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph) < 0)
    {
       fprintf(stderr, "Cannot create buffer source\n");
       return(0);
    }

    /* buffer video sink: to terminate the filter chain. */
    buffersink_params = av_buffersink_params_alloc();
    buffersink_params->pixel_fmts = pix_fmts;

    if (avfilter_graph_create_filter(&av->buffersink_ctx, buffersink, "out", NULL, buffersink_params, filter_graph) < 0)
    {
       printf("Cannot create buffer sink\n");
       return(HACKTV_ERROR);
    }

    /* Endpoints for the filter graph. */
    outputs->name       = av_strdup("in");
    outputs->filter_ctx = av->buffersrc_ctx;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;

    inputs->name       = av_strdup("out");
    inputs->filter_ctx = av->buffersink_ctx;
    inputs->pad_idx    = 0;
    inputs->next       = NULL;

    const char *filter_descr = "vflip";

    if (avfilter_graph_parse_ptr(filter_graph, filter_descr, &inputs, &outputs, NULL) < 0)
    {
       printf("Cannot parse filter graph\n");
       return(0);
    }

    if (avfilter_graph_config(filter_graph, NULL) < 0)
    {
       printf("Cannot configure filter graph\n");
       return(0);
    }

    av_free(buffersink_params);
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);

    The above code is called from elsewhere like this:

    av->frame_in->pts = av_frame_get_best_effort_timestamp(av->frame_in);

    /* push the decoded frame into the filtergraph */
    if (av_buffersrc_add_frame(av->buffersrc_ctx, av->frame_in) < 0)
    {
       printf("Error while feeding the filtergraph\n");
       break;
    }

    /* pull filtered pictures from the filtergraph */
    if (av_buffersink_get_frame(av->buffersink_ctx, av->frame_out) < 0)
    {
       printf("Error while sourcing the filtergraph\n");
       break;
    }

    /* do stuff with frame */

    Now, the code works absolutely fine and the video comes out the way I expect it to (vertically flipped for testing purposes).

    The biggest issue I have is a massive memory leak: a high-resolution video will consume 2 GB in a matter of seconds and crash the program. I traced the leak to this piece of code:

    /* push the decoded frame into the filtergraph */
    if (av_buffersrc_add_frame(av->buffersrc_ctx, av->frame_in) < 0)

    If I bypass the filter by doing av->frame_out = av->frame_in; without pushing the frame into the graph (and obviously not pulling from it), there is no leak and memory usage is stable.

    Now, I am very new to C, so be gentle, but it seems like I should be clearing out the buffersrc_ctx somehow, although I have no idea how. I’ve looked through the official documentation but couldn’t find anything.

    Can someone advise?
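
    For reference, a minimal sketch of the per-frame cleanup such a loop typically needs (assuming av->frame_in and av->frame_out were allocated with av_frame_alloc(), and render_frame() is a hypothetical stand-in for the actual renderer). av_buffersink_get_frame() fills the output frame with a new reference on every successful call, so that reference has to be released once the frame has been used:

    av->frame_in->pts = av_frame_get_best_effort_timestamp(av->frame_in);

    /* av_buffersrc_add_frame() takes ownership of the references held by
       frame_in and resets it, so the input side does not accumulate memory. */
    if (av_buffersrc_add_frame(av->buffersrc_ctx, av->frame_in) < 0)
    {
       printf("Error while feeding the filtergraph\n");
       break;
    }

    /* Each successful call gives frame_out a new reference to a filtered
       picture... */
    while (av_buffersink_get_frame(av->buffersink_ctx, av->frame_out) >= 0)
    {
       render_frame(av->frame_out);   /* hypothetical rendering step */
       av_frame_unref(av->frame_out); /* ...so release it after use */
    }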

  • What is the best way to merge .mkv and .mka files using ffmpeg?

    28 June 2017, by Robert

    I’m using ffmpeg to merge .mkv and .mka files into .mp4 files. My current command looks like this:

    ffmpeg -i video.mkv -i audio.mka output_path.mp4

    The audio and video files are pre-signed URLs from Amazon S3. Even on a server with sufficient resources, this process is going very slowly. I’ve researched situations where you can tell ffmpeg to skip re-encoding each frame, but I think that in my situation it actually does need to re-encode each frame.

    I’ve downloaded 2 sample files to my MacBook Pro and installed ffmpeg locally via Homebrew. When I run the command

    ffmpeg -i video.mkv -i audio.mka -c copy output.mp4

    I get the following output:

    ffmpeg version 3.3.2 Copyright (c) 2000-2017 the FFmpeg developers
     built with Apple LLVM version 8.1.0 (clang-802.0.42)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/3.3.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
     libavutil      55. 58.100 / 55. 58.100
     libavcodec     57. 89.100 / 57. 89.100
     libavformat    57. 71.100 / 57. 71.100
     libavdevice    57.  6.100 / 57.  6.100
     libavfilter     6. 82.100 /  6. 82.100
     libavresample   3.  5.  0 /  3.  5.  0
     libswscale      4.  6.100 /  4.  6.100
     libswresample   2.  7.100 /  2.  7.100
     libpostproc    54.  5.100 / 54.  5.100
    Input #0, matroska,webm, from '319_audio_1498590673766.mka':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2017-06-27T19:10:58.000000Z
     Duration: 00:00:03.53, start: 2.831000, bitrate: 50 kb/s
       Stream #0:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
       Metadata:
         title           : Audio
    Input #1, matroska,webm, from '319_video_1498590673766.mkv':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2017-06-27T19:10:58.000000Z
     Duration: 00:00:03.97, start: 2.851000, bitrate: 224 kb/s
       Stream #1:0(eng): Video: vp8, yuv420p(progressive), 640x480, SAR 1:1 DAR 4:3, 30 tbr, 1k tbn, 1k tbc (default)
       Metadata:
         title           : Video
    [mp4 @ 0x7fa4f0806800] Could not find tag for codec vp8 in stream #0, codec not currently supported in container
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
    Stream mapping:
     Stream #1:0 -> #0:0 (copy)
     Stream #0:0 -> #0:1 (copy)
       Last message repeated 1 times

    So it appears that the specific encodings I’m working with are VP8 video and Opus audio, which I believe are incompatible with the .mp4 output container. I would appreciate answers that cover ways of optimally merging VP8 and Opus into .mp4 output, or that point me towards output media formats that are compatible with both VP8 and Opus and are playable on web and mobile devices, so that I can bypass the re-encoding step altogether.
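
    For what it’s worth, a minimal sketch of the two options, assuming the inputs really are VP8 video and Opus audio as the log above shows:

    # Option 1: no re-encoding at all. WebM accepts both VP8 and Opus and
    # plays natively in most modern browsers:
    ffmpeg -i video.mkv -i audio.mka -c copy output.webm

    # Option 2: re-encode to H.264/AAC for the broadest web and mobile
    # playback from an .mp4 container:
    ffmpeg -i video.mkv -i audio.mka -c:v libx264 -c:a aac -movflags +faststart output.mp4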

    EDIT:

    Just wanted to provide a benchmark after following LordNeckbeard’s advice:

    A 4 min 41 s video transcoded locally on my Mac:

    LordNeckbeard’s approach: 15 min 55 s (955 seconds)
    Current approach: 18 min 49 s (1129 seconds)

    Roughly an 18% speed increase.