Advanced search

Media (1)

Keyword: - Tags -/censure

Other articles (77)

  • Customizable form

    21 June 2013

    This page presents the fields available in the form for publishing a media item and lists the fields that can be added.
    Media creation form
    For a media-type document, the default fields are: Text; Enable/disable the forum (the invitation to comment can be turned off per article); Licence; Add/remove authors; Tags.
    This form can be modified under:
    Administration > Configuration des masques de formulaire. (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites that publish documents of all kinds.
    It creates "media" items, meaning: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a "media" article;

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images below for a comparison.
    To use it, enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen), enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (6560)

  • FFmpeg - MJPEG decoding - getting different values

    27 December 2016, by ahmadh

    I have a set of JPEG frames which I am muxing into an AVI, which gives me an MJPEG video. This is the command I run on the console:

    ffmpeg -y -start_number 0 -i %06d.JPEG -codec copy vid.avi

    When I try to demux the video using the ffmpeg C API, I get frames whose values are slightly different. The demuxing code looks something like this:

    AVFormatContext* fmt_ctx = NULL;
    AVCodecContext* cdc_ctx = NULL;
    AVCodec* vid_cdc = NULL;
    int ret;
    unsigned int height, width;

    ....
    // read_nframes is the number of frames to read
    output_arr = new unsigned char [height * width * 3 *
                                   sizeof(unsigned char) * read_nframes];

    avcodec_open2(cdc_ctx, vid_cdc, NULL);

    int num_bytes;
    uint8_t* buffer = NULL;
    const AVPixelFormat out_format = AV_PIX_FMT_RGB24;

    num_bytes = av_image_get_buffer_size(out_format, width, height, 1);
    buffer = (uint8_t*)av_malloc(num_bytes * sizeof(uint8_t));

    AVFrame* vid_frame = NULL;
    vid_frame = av_frame_alloc();
    AVFrame* conv_frame = NULL;
    conv_frame = av_frame_alloc();

    av_image_fill_arrays(conv_frame->data, conv_frame->linesize, buffer,
                        out_format, width, height, 1);

    struct SwsContext *sws_ctx = NULL;
    sws_ctx = sws_getContext(width, height, cdc_ctx->pix_fmt,
                            width, height, out_format,
                            SWS_BILINEAR, NULL,NULL,NULL);

    int frame_num = 0;
    AVPacket vid_pckt;
     while (av_read_frame(fmt_ctx, &vid_pckt) >= 0) {
        ret = avcodec_send_packet(cdc_ctx, &vid_pckt);
        av_packet_unref(&vid_pckt); // the decoder keeps its own reference to the data
        if (ret < 0)
            break;

       ret = avcodec_receive_frame(cdc_ctx, vid_frame);
       if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
           break;
       if (ret >= 0) {
           // convert from the decoder's native pixel format to packed RGB24
           sws_scale(sws_ctx, vid_frame->data,
                     vid_frame->linesize, 0, vid_frame->height,
                     conv_frame->data, conv_frame->linesize);

           unsigned char* r_ptr = output_arr +
               (height * width * sizeof(unsigned char) * 3 * frame_num);
           unsigned char* g_ptr = r_ptr + (height * width * sizeof(unsigned char));
           unsigned char* b_ptr = g_ptr + (height * width * sizeof(unsigned char));
           unsigned int pxl_i = 0;

           for (unsigned int r = 0; r < height; ++r) {
               uint8_t* avframe_r = conv_frame->data[0] + r*conv_frame->linesize[0];
               for (unsigned int c = 0; c < width; ++c) {
                   r_ptr[pxl_i] = avframe_r[0];
                   g_ptr[pxl_i]   = avframe_r[1];
                   b_ptr[pxl_i]   = avframe_r[2];
                   avframe_r += 3;
                   ++pxl_i;
               }
           }

           ++frame_num;

           if (frame_num >= read_nframes)
               break;
       }
    }

    ...

    In my experience around two-thirds of the pixel values differ, each by ±1 (on a [0,255] scale). I am wondering whether this is due to some decoding scheme FFmpeg uses for reading JPEG frames? I tried encoding and decoding PNG frames, and that works perfectly.

    In short, my goal is to get the same pixel-by-pixel values for each JPEG frame as I would have gotten by reading the JPEG images directly. Here is the stand-alone code I used. It includes CMake files to build the code, and a couple of JPEG frames with the converted AVI file to reproduce the problem (pass --filetype png to test the PNG decoding).
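    Two checks may help narrow this down (both are suggestions, not from the original post). Since -codec copy stores the JPEG bitstream unchanged, extracting the frames back out with stream copy and comparing them byte-for-byte against the originals would confirm that the mux/demux step is lossless and that the differences come from decoding:

     ffmpeg -i vid.avi -c:v copy check_%06d.jpg

    Beyond that, JPEG decoding is not bit-exact across implementations: the inverse DCT in libavcodec may differ from the one in the library used to read the JPEGs directly (libjpeg, for example), and per-sample rounding differences of ±1 are within the spec's tolerance. A minimal sketch to pin the IDCT used by the decoder, assuming the demuxing code above:

     cdc_ctx->idct_algo = FF_IDCT_SIMPLE; // assumption: compare against FF_IDCT_AUTO and other FF_IDCT_* values
     avcodec_open2(cdc_ctx, vid_cdc, NULL);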

  • Save frame as image during video stream using ffmpeg, in C

    30 April 2017, by George T.

    I’m using the Parrot SDK3 to retrieve the video stream from a Bebop2 drone. From it I need to "extract" a frame and save it as an image.

    The whole code can be found here, but I will try to include what I understand to be the important parts.

    This is where the video is sent to mplayer; I assume from this that the video format is H.264:

    if (videoOut != NULL)
    {
       if (codec.type == ARCONTROLLER_STREAM_CODEC_TYPE_H264)
       {
           if (DISPLAY_WITH_MPLAYER)
           {
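                // the H.264 SPS/PPS parameter sets must be written before any
                // frame data for the raw stream to be decodable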
               fwrite(codec.parameters.h264parameters.spsBuffer, codec.parameters.h264parameters.spsSize, 1, videoOut);
               fwrite(codec.parameters.h264parameters.ppsBuffer, codec.parameters.h264parameters.ppsSize, 1, videoOut);

               fflush (videoOut);
           }
       }
    }

    This is where each frame is received and written to videoOut to be displayed:

    eARCONTROLLER_ERROR didReceiveFrameCallback (ARCONTROLLER_Frame_t *frame, void *customData)
    {
       if (videoOut != NULL)
       {
           if (frame != NULL)
           {
               if (DISPLAY_WITH_MPLAYER)
               {
                   fwrite(frame->data, frame->used, 1, videoOut);

                   fflush (videoOut);
               }
           }
       }
       return ARCONTROLLER_OK;
    }

    The idea I had in mind was to fwrite(frame->data, frame->used, 1, videoOut); to a different file, but that just creates a corrupted file, probably because of the encoding. How can I get this frame and store it as a separate image? Preferably as .png, .jpg or .gif.

    Any help will be greatly appreciated! Thanks in advance!
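    A sketch of one approach (not verified against the Parrot SDK): the buffer handed to the callback is an encoded H.264 access unit, not decodable pixels, which is why writing it to its own file yields a corrupted image. If the SPS/PPS buffers are written first, as in the mplayer path above, the resulting dump is a raw Annex B stream that the ffmpeg command-line tool can decode, and a single frame can then be extracted as an image:

     ffmpeg -f h264 -i dump.h264 -frames:v 1 frame.png

    Here dump.h264 is a hypothetical file produced by fwriting the SPS/PPS buffers followed by each frame->data.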

  • Can I make a seek-accurate video player with ffmpeg?

    5 September 2016, by Will Tower

    Simple question: I would like to make a video player with "fairly" accurate seeking and need to know whether I can do that with ffmpeg. I think the answer is "Well, yeah, of course you can," but I want a reality check before I waste lots of time only to discover that I can't.

    I am coding in ActionScript for a cross-platform desktop AIR app. I am using a NativeProcess to run ffmpeg in the background and using ActionScript's NetConnection and NetStream classes to connect the ffmpeg stream to a Video class. Using ffmpeg with a NativeProcess is essentially running ffmpeg from the command line. I have basic AS3 code roughed out to load and play a video through ffmpeg, and I am going through the dranger tutorial (How to Write a Video Player in Less Than 1000 Lines) now to sort out the rest of it.

    Would appreciate hearing of any caveats or cautions.

    Update: So I have been fiddling with this all afternoon, but no success yet. I am able to run a video with the following commands (NativeProcess code removed for clarity):

     ffmpegArgs = new Vector.<String>();
     ffmpegArgs.push("-re", "-i", videoPath, "-c:a", "copy", "-c:v", "copy", "-f", "flv", "-");

    But I am confused about how to interact with ffmpeg once it is running. I have tried a bunch of variations on this scenario, using writeUTFBytes to send commands to the ffmpeg NativeProcess instance, without a good result:

    var cmd:String = "-ss, 50, -re, -i," + videoPath + ",-c:a, copy, -c:v, copy, -f flv -";
    ffmpegProcess.standardInput.writeUTFBytes(cmd);

    Do I need to tear down and respawn the ffmpeg NativeProcess instance in order to access different sections of a video file?


    -ss position (input/output)

    When used as an input option (before -i), seeks in this input file to position. Note that in most formats it is not possible to seek exactly, so ffmpeg will seek to the closest seek point before position. When transcoding and -accurate_seek is enabled (the default), this extra segment between the seek point and position will be decoded and discarded. When doing stream copy or when -noaccurate_seek is used, it will be preserved.

    When used as an output option (before an output filename), decodes but discards input until the timestamps reach position.

    https://ffmpeg.org/ffmpeg.html#Video-Options
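    As far as I know, the ffmpeg command-line tool does not accept seek commands on standard input, so respawning the NativeProcess with a new -ss argument is the usual way to jump to a different section. A sketch of how the quoted behaviour plays out, reusing the arguments from the question (the codec choices in the second command are assumptions, not taken from the post):

     # input seeking with stream copy: fast, but starts at the closest seek point before 50s
     ffmpeg -ss 50 -re -i input.mp4 -c:a copy -c:v copy -f flv -

     # input seeking while transcoding: with -accurate_seek (the default) the segment
     # between the seek point and 50s is decoded and discarded, so output starts at 50s
     ffmpeg -ss 50 -re -i input.mp4 -c:a aac -c:v libx264 -f flv -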