
Other articles (63)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An overview of the notable changes involved in moving a MediaSPIP installation from version 0.1 to version 0.3. What's new
    Regarding software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding a logo, a banner or a background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes on your MediaSPIP site, or news about your projects, through the news section.
    In MediaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of the news item type, the default fields are: publication date (customising the publication date) (...)

On other sites (7927)

  • How to extract 16-bit PNG frame from lossless x264 video

    29 May 2017, by whiskeyspider

    I encoded a series of 16-bit grayscale PNGs to a lossless video with the following command:

    ffmpeg -i image%04d.png -crf 0 -c:v libx264 -preset veryslow output.mp4

    I am now trying to verify that the conversion to video was truly lossless by pulling out the PNGs at the same quality. The command I'm using:

    ffmpeg -i output.mp4 image%04d.png

    However, this is outputting 8-bit PNGs. I’ve tried various options I’ve read about such as -vcodec png and -qscale 0 but so far nothing appears to make it output 16-bit PNGs.

    How do I extract all frames from the video at the same quality as they went in? Or did I make a mistake in creating the lossless video in the first place?

    Edit: I get this error message when trying to use -pix_fmt gray16be.

    [swscaler @ 0x7fef1a8f0800] deprecated pixel format used, make sure you did set range correctly

    Full output:

    ffmpeg -i output.mp4 -pix_fmt gray16be  image%04d.png
    ffmpeg version 3.3.1 Copyright (c) 2000-2017 the FFmpeg developers
     built with Apple LLVM version 8.0.0 (clang-800.0.42.1)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/3.3.1 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
     libavutil      55. 58.100 / 55. 58.100
     libavcodec     57. 89.100 / 57. 89.100
     libavformat    57. 71.100 / 57. 71.100
     libavdevice    57.  6.100 / 57.  6.100
     libavfilter     6. 82.100 /  6. 82.100
     libavresample   3.  5.  0 /  3.  5.  0
     libswscale      4.  6.100 /  4.  6.100
     libswresample   2.  7.100 /  2.  7.100
     libpostproc    54.  5.100 / 54.  5.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'output.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf57.71.100
     Duration: 00:00:09.76, start: 0.000000, bitrate: 1337 kb/s
       Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuvj444p(pc), 512x512, 1336 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
       Metadata:
         handler_name    : VideoHandler
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> png (native))
    Press [q] to stop, [?] for help
    [swscaler @ 0x7fef1a8f0800] deprecated pixel format used, make sure you did set range correctly
    Output #0, image2, to 'image%04d.png':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf57.71.100
       Stream #0:0(und): Video: png, gray16be, 512x512, q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc (default)
       Metadata:
         handler_name    : VideoHandler
         encoder         : Lavc57.89.100 png
    frame=  244 fps=0.0 q=-0.0 Lsize=N/A time=00:00:09.76 bitrate=N/A speed=  21x    
    video:4038kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown

    I’m happy to use a non-ffmpeg solution if there is one.
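
    Reading the log above, the stream stored in output.mp4 is yuvj444p, an 8-bit format, so the 16-bit source appears to have been reduced to 8 bits per sample already at encoding time (libx264 typically handles at most 8 or 10 bits per component). A roundtrip that keeps 16-bit grayscale intact could instead go through a codec with native gray16 support, for example FFV1 in Matroska. The commands below are only a sketch: lossless.mkv, extracted%04d.png, original.md5 and roundtrip.md5 are placeholder names, and they assume the local ffmpeg build includes the FFV1 encoder.

    # encode the 16-bit grayscale PNGs losslessly, keeping 16-bit samples
    ffmpeg -i image%04d.png -c:v ffv1 -pix_fmt gray16le lossless.mkv

    # pull the frames back out as 16-bit PNGs
    ffmpeg -i lossless.mkv -pix_fmt gray16be extracted%04d.png

    # compare per-frame checksums of source and roundtrip (forcing a common
    # pixel format so byte order does not affect the hashes)
    ffmpeg -i image%04d.png -pix_fmt gray16le -f framemd5 original.md5
    ffmpeg -i lossless.mkv -pix_fmt gray16le -f framemd5 roundtrip.md5

    If the two framemd5 listings match line for line, the roundtrip was bit-exact.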

  • Increase Duration of a video FFMPEG C++

    9 April 2015, by Shahroz Tariq

    I am using the code from the FFmpeg samples that encodes a picture into a video. All I want is to give it a series of pictures and get back a video in which each picture lasts one second. The code below just takes one picture from my file system and creates a video from it:

    AVCodec *codec;
    AVCodecContext *c = NULL;
    int i, ret, x, y, got_output;
    FILE *f;

    AVPacket pkt;
    uint8_t endcode[] = { 0, 0, 1, 0xb7 };

    printf("Encode video file %s\n", filename);

    /* find the mpeg1 video encoder */
    codec = avcodec_find_encoder((AVCodecID)codec_id);
    if (!codec)
    {
       fprintf(stderr, "Codec not found\n");
       exit(1);
    }

    c = avcodec_alloc_context3(codec);
    if (!c)
    {
       fprintf(stderr, "Could not allocate video codec context\n");
       exit(1);
    }

    /* put sample parameters */
    c->bit_rate = 400000;
    /* resolution must be a multiple of two */
    c->width = 200;
    c->height = 200;
    /* frames per second */
    AVRational rational;
    rational.num = 1;
    rational.den = 25;
    c->time_base = rational;
    /* emit one intra frame every ten frames
    * check frame pict_type before passing frame
    * to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
    * then gop_size is ignored and the output of encoder
    * will always be I frame irrespective to gop_size
    */
    c->gop_size = 10;
    c->max_b_frames = 1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;

    if (codec_id == AV_CODEC_ID_H264)
       av_opt_set(c->priv_data, "preset", "slow", 0);

    /* open it */
    if (avcodec_open2(c, codec, NULL) < 0)
    {
       fprintf(stderr, "Could not open codec\n");
       exit(1);
    }

    fopen_s(&f, filename, "wb");
    if (!f)
    {
       fprintf(stderr, "Could not open %s\n", filename);
       exit(1);
    }
    AVFrame *frame = OpenImage("..\\..\\..\\..\\..\\..\\1.jpg");
    //frame = av_frame_alloc();
    if (!frame)
    {
       fprintf(stderr, "Could not allocate video frame\n");
       exit(1);
    }

    frame->format = c->pix_fmt;
    frame->width = c->width;
    frame->height = c->height;
    /* the image can be allocated by any means and av_image_alloc() is
    * just the most convenient way if av_malloc() is to be used */

    int screenHeight = 200;
    int screenWidth = 200;
    for (i = 0; i < 25; i++)
    {
       av_init_packet(&pkt);
       pkt.data = NULL;    // packet data will be allocated by the encoder
       pkt.size = 0;

       fflush(stdout);



       frame->pts = i;

       /* encode the image */
       ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
       if (ret < 0)
       {
           fprintf(stderr, "Error encoding frame\n");
           exit(1);
       }

       if (got_output)
       {
           printf("Write frame %3d (size=%5d)\n", i, pkt.size);
           fwrite(pkt.data, 1, pkt.size, f);
           av_free_packet(&pkt);
       }
    }

    /* get the delayed frames */
    for (got_output = 1; got_output; i++)
    {
       fflush(stdout);

       ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
       if (ret < 0)
       {
           fprintf(stderr, "Error encoding frame\n");
           exit(1);
       }

       if (got_output)
       {
           printf("Write frame %3d (size=%5d)\n", i, pkt.size);
           fwrite(pkt.data, 1, pkt.size, f);
           av_free_packet(&pkt);
       }
    }

    /* add sequence end code to have a real mpeg file */
    fwrite(endcode, 1, sizeof(endcode), f);
    fclose(f);

    avcodec_close(c);
    av_free(c);
    av_freep(&frame->data[0]);
    av_frame_free(&frame);
    printf("\n");
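
    With the time base set to 1/25 above, one way to make each picture last one second is to feed the same decoded image to the encoder for 25 consecutive pts values before loading the next file. The fragment below is only a sketch built on the question's own loop: imagePaths holds hypothetical placeholder file names, OpenImage() is the question's helper and is assumed to return frames that already match c->pix_fmt, c->width and c->height, and the flush loop, end code and cleanup from the original still follow it.

    /* Sketch: one second per picture at time_base 1/25 (25 frames per image). */
    const char *imagePaths[] = { "1.jpg", "2.jpg", "3.jpg" };  /* placeholder names */
    int numImages = sizeof(imagePaths) / sizeof(imagePaths[0]);
    int64_t pts = 0;

    for (int img = 0; img < numImages; img++)
    {
       AVFrame *picture = OpenImage(imagePaths[img]);
       if (!picture)
           continue;                            /* skip unreadable files */

       for (int k = 0; k < 25; k++)             /* 25 frames at 1/25 s each */
       {
           av_init_packet(&pkt);
           pkt.data = NULL;                     /* the encoder allocates the data */
           pkt.size = 0;

           picture->pts = pts++;                /* strictly increasing timestamps */

           ret = avcodec_encode_video2(c, &pkt, picture, &got_output);
           if (ret < 0)
           {
               fprintf(stderr, "Error encoding frame\n");
               exit(1);
           }
           if (got_output)
           {
               fwrite(pkt.data, 1, pkt.size, f);
               av_free_packet(&pkt);
           }
       }

       av_freep(&picture->data[0]);             /* mirrors the original cleanup; adjust to how OpenImage allocates */
       av_frame_free(&picture);
    }
    /* ...then flush the delayed frames and write the end code exactly as in the original. */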