
Media (91)

Other articles (54)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items: a "media" is a SPIP article created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a "media" article;

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as Ogv and WebM (for HTML5 playback) and as MP4 (for Flash playback).
    Audio files are encoded as Ogg (for HTML5 playback) and as MP3 (for Flash playback).
    Where possible, text is analysed to retrieve the data needed for indexing by search engines, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Diogene: creating custom templates for content editing forms

    26 October 2010, by

    Diogene is one of the SPIP plugins (extensions) enabled by default when MediaSPIP is initialised.
    What this plugin does
    Creating form templates
    The Diogène plugin lets you create sector-specific form templates for the three SPIP objects: articles; sections; sites
    It thus lets you define, for a given sector, a form template per object, adding or removing fields to tailor the form (...)

On other sites (3004)

  • Reusing Decoder instance for multiple png files [ffmpeg]

    21 March 2016, by gaurav

    In my code I am decoding multiple PNG files. For each file I create an AVCodecContext instance, open the codec, decode the file and finally close the codec. I tried reusing the same AVCodecContext instance for all the PNG files, but it doesn't work: I get bugged-out (random-colour) images when I display them on screen. Below is my code, which works, but it creates an AVCodecContext object every time the function is called.

    int ImageSequence::decodeFromByteArray(uint8_t *inbuf , AVFrame **frame , int dataLength){
       AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_PNG);

       AVCodecContext *codecCtx = avcodec_alloc_context3(codec);

       if(!codecCtx){
           av_log( NULL , AV_LOG_ERROR , "Cannot allocate codec ctx \n");
           return -1;
       }

       if(codec->capabilities & CODEC_CAP_TRUNCATED)
       {
           codecCtx->flags |= CODEC_FLAG_TRUNCATED; /* we do not send complete frames */
       }

       if (avcodec_open2(codecCtx, codec, NULL) < 0){
           av_log( NULL , AV_LOG_ERROR , "cannot open decoder \n");
           avcodec_free_context(&codecCtx); /* don't leak the context on failure */
           return -1;
       }

       AVPacket pkt;
       av_init_packet(&pkt);

       pkt.size = dataLength;
       pkt.data = inbuf;

       int got_frame;
       avcodec_decode_video2(codecCtx, *frame, &got_frame, &pkt);

       int ret = got_frame ? 1 : -1;

       av_free_packet(&pkt);
       /* free decoder resources: close the codec first, then free the
          context -- calling avcodec_close() after avcodec_free_context()
          (as in my first attempt) dereferences freed, NULLed memory */
       avcodec_close(codecCtx);
       avcodec_free_context(&codecCtx);

       return ret;
    }

    A couple of questions:
    1) Is it possible to use the same AVCodecContext object for multiple image files?
    2) If yes, what should I do to reset the decoder state after decoding each image file?
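    One pattern worth sketching as a possible answer: keep a single AVCodecContext open and call avcodec_flush_buffers() between images, which is the libavcodec call for resetting decoder state. A minimal sketch using the current send/receive API (the helper name decode_one_png is mine; error handling is trimmed):

    ```c
    #include <libavcodec/avcodec.h>

    /* Sketch: reuse one already-opened PNG decoder context for many buffers.
       Each PNG is sent as one complete packet; avcodec_flush_buffers()
       resets the decoder's internal state before the next image. */
    static int decode_one_png(AVCodecContext *ctx, uint8_t *buf, int size, AVFrame *frame)
    {
        AVPacket *pkt = av_packet_alloc();
        if (!pkt)
            return -1;
        pkt->data = buf;   /* packet borrows the caller's buffer */
        pkt->size = size;

        int ret = avcodec_send_packet(ctx, pkt);
        if (ret >= 0)
            ret = avcodec_receive_frame(ctx, frame);

        av_packet_free(&pkt);       /* frees the packet, not the borrowed buffer */
        avcodec_flush_buffers(ctx); /* reset state; the context stays open */
        return ret;                 /* 0 on success */
    }
    ```

    The context would be allocated and opened once (avcodec_alloc_context3 / avcodec_open2) outside the per-image loop and freed once at the end, avoiding the per-file alloc/open/close cost of the function above.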

  • FFMPEG Blending for VP9 format videos

    28 November 2022, by OneWorld

    I get a darker background, almost like a placeholder for the asset that is to be blended, whereas the background should be transparent and show the same colour as the rest of the background.

    I have two webm vp9 files that I am trying to blend using FFMPEG blending functionality.

    One of the videos is a zoom animation that starts from 1 pixel in size and then increases in size to 50 x 50 pixels.

    The other is a solid red background video 640 px x 360 px

    At frame 1 the result looks like this :-

    At about 0.5 seconds through, the result looks like this :-

    At the end of the sequence, the zoom animation webm fills that darker square you see (50 x 50 pixels).

    The code to do the blending of the two webm files looks like this :-

    filter_complex.extend([
        "[0:v][1:v]overlay=0:0:enable='between(t,0,2)'[out1];",
        '[out1]split[out1_1][out1_2];',
        '[out1_1]crop=50:50:231:251:exact=1,setsar=1[cropped1];',
        '[cropped1][2:v]blend=overlay[blended1];',
        "[out1_2][blended1]overlay=231:251:enable='between(t,0,2)'[out2]"
    ])

    This overlays a red background onto a white background, making the red background the new background colour.
    It then splits the red background into two outputs, one for cropping and one for overlaying.
    It then crops, out of the red background, the location and size of the layer to be blended; we do this because blending only works on assets of the same size.
    It then blends the zoom animation onto the cropped background.
    Finally it overlays the blended result onto the red background.
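    My current suspicion is that the dark square is just the blend math: blend ignores the top layer's alpha, the fully transparent pixels of the zoom layer decode as all-zero samples, and plugging top = 0 into the usual overlay formula darkens any base value. A quick sanity check of that formula (values normalised to 0..1; the helper name is mine):

    ```c
    #include <stdio.h>

    /* The "overlay" blend mode as commonly defined, per component,
       with both inputs normalised to the range 0..1. */
    static double overlay_blend(double base, double top)
    {
        return base < 0.5 ? 2.0 * base * top
                          : 1.0 - 2.0 * (1.0 - base) * (1.0 - top);
    }

    int main(void)
    {
        /* A transparent pixel in the zoom layer decodes as top = 0. */
        double dark = overlay_blend(0.3, 0.0);   /* luma of red is roughly 0.3 */
        printf("%.2f\n", dark);                  /* prints 0.00: the region goes dark */

        /* mid-tones blended with themselves stay put */
        printf("%.2f\n", overlay_blend(0.5, 0.5)); /* prints 0.50 */
        return 0;
    }
    ```

    With top fixed at zero, the first branch always yields 0, which would explain the darker placeholder square regardless of what the alpha channel says.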

    Unfortunately I'm unable to attach videos on Stack Overflow, otherwise I would have included them.

    The full command looks like this :-

    ffmpeg -i v1_background.webm -itsoffset 0 -c:v libvpx-vp9 -i v2_red.webm -itsoffset 0 -c:v libvpx-vp9 -i v3_zoom.webm -filter_complex  "[0:v][1:v]overlay=0:0[out1];[out1]split[out1_1][out1_2];[out1_1]crop=50:50:231:251:exact=1,setsar=1[cropped1];[cropped1][2:v]blend=overlay[blended1];[out1_2][blended1]overlay=231:251"  output_video_with_blended_overlaid_asset.mp4

    I have checked the input vp9 webm zoom video file by extracting the first frame of the video

    ffmpeg -vcodec libvpx-vp9 -i zoom.webm first_frame.png

    and inspecting the colours in all the channels in GIMP. The colours (apart from the opaque pixel in the middle) are all zero, including the alpha channel.

    Note that I tried adding in all_mode, so that the blend command is blend=all_mode=overlay, however this still shows the darker placeholder under the animation asset. In other words, this command

    ffmpeg -i v1_background.webm -itsoffset 0 -c:v libvpx-vp9 -i v2_red.webm -itsoffset 0 -c:v libvpx-vp9 -i v3_zoom.webm -filter_complex  "[0:v][1:v]overlay=0:0[out1];[out1]split[out1_1][out1_2];[out1_1]crop=50:50:231:251:exact=1,setsar=1[cropped1];[cropped1][2:v]blend=all_mode=overlay[blended1];[out1_2][blended1]overlay=231:251"  output_video_with_blended_all_mode_overlay_asset.mp4

    also doesn't work.

    Trying to convert the formats to RGBA first doesn't help either; the command below is simplified a bit:

    ffmpeg -c:v libvpx-vp9 -i v2_red.webm -c:v libvpx-vp9 -i v3_zoom.webm -filter_complex "[0:v]format=pix_fmts=rgba[out1];[1:v]format=pix_fmts=rgba[out2];[out1]split[out1_1][out1_2];[out1_1]crop=50:50:0:0:exact=1,setsar=1[cropped1];[cropped1][out2]blend=all_mode=dodge[blended1];[out1_2][blended1]overlay=50:50" output_video_with_blended_all_mode_dodge_rgba_and_alpha_premultiplied_overlay.mp4

    Adding an alpha premultiply didn't help either:

    ffmpeg -c:v libvpx-vp9 -i v2_red.webm -c:v libvpx-vp9 -i v3_zoom.webm -filter_complex "[0:v]format=pix_fmts=rgba[out1];[1:v]setsar=1,format=pix_fmts=rgba,geq=r='r(X,Y)*alpha(X,Y)/255':g='g(X,Y)*alpha(X,Y)/255':b='b(X,Y)*alpha(X,Y)/255'[out2];[out1]split[out1_1][out1_2];[out1_1]crop=50:50:0:0:exact=1,setsar=1[cropped1];[cropped1][out2]blend=all_mode=dodge[blended1];[out1_2][blended1]overlay=50:50" output_video_with_blended_all_mode_dodge_rgba_and_alpha_premultiplied_overlay.mp4

    Is there a workaround I could use so that the background stays transparent?
    I was looking for a way to change the input pixel format inside the filter_complex graph to see if that would work, but couldn't find anything about this.

  • Hardware-accelerated HEVC encode/decode in Python on Intel x64 with Nvidia GPU [closed]

    24 March 2021, by Jason M

    I am working on a Python project that decodes a 4K@25fps HEVC stream from an IP camera, processes it, crops some regions of interest and encodes each into a new stream. The program runs on an Intel i7-10700 PC with an Nvidia GTX 1080. I see that OpenCV offers Intel MediaSDK support and a CUDA codec, while Nvidia offers PyNvCodec.

    What would be the best combination of hardware-accelerated encoding/decoding options? A demo project would be even nicer.