
Other articles (19)

  • Final creation of the channel

    12 March 2010, by

    Once your request is approved, you can proceed with the actual creation of the channel. Each channel is a fully fledged site placed under your responsibility. The platform administrators have no access to it.
    Upon approval, you receive an email inviting you to create your channel.
    To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
    At that point you are asked for a password; you simply have to (...)

  • The farm's regular cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as cron tasks, at regular intervals.
    The super cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the cron of every instance of the shared hosting on a regular basis. Combined with a system cron on the central site of the shared hosting, this makes it easy to generate regular visits to the various sites and prevent the tasks of rarely visited sites from being too (...)

  • Configurable image and logo sizes

    9 February 2011, by

    In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can vary from one theme to another, they can be defined directly in the theme, sparing the user from having to configure them manually after changing the appearance of their site.
    These image sizes are also available in the MediaSPIP Core specific configuration. The maximum size of the site logo in pixels, allowing (...)

On other sites (3369)

  • Can FFmpeg crop a gif image in under 1 second?

    5 September 2021, by jeongbalmountain

    I am developing an iOS application that works with gifs.

    


    But I have a problem with gif cropping: cropping a gif with FFmpeg takes a very long time.

    


    (updated code)

    


    These are the ffmpeg commands I've tried.

    


    cropped images -> gif

    


    ffmpeg -f image2 -i %d.jpg -vf 'scale=450:-1' output.gif


    


    using concat command (cropped images -> gif)

    


    ffmpeg -f concat -i tmp.txt output.gif


    


    using crop filter

    


    ffmpeg -i in.gif -filter:v "crop=out_w:out_h:x:y" out.gif


    


    These commands are very slow, both when converting images to a gif and when cropping a gif.

    


    Is there a way to crop a gif in less than 1 second?

    


    (updated2)
ffmpeg -i in.gif -filter:v "crop=out_w:out_h:x:y" out.gif

    


    


    ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers
      built with Apple clang version 12.0.5 (clang-1205.0.22.9)
      configuration: --prefix=/usr/local/Cellar/ffmpeg/4.4_2 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-avresample --enable-videotoolbox
      libavutil      56. 70.100 / 56. 70.100
      libavcodec     58.134.100 / 58.134.100
      libavformat    58. 76.100 / 58. 76.100
      libavdevice    58. 13.100 / 58. 13.100
      libavfilter     7.110.100 /  7.110.100
      libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  9.100 /  5.  9.100
      libswresample   3.  9.100 /  3.  9.100
      libpostproc    55.  9.100 / 55.  9.100
    Input #0, gif, from './test.gif':
      Duration: 00:00:18.27, start: 0.000000, bitrate: 70098 kb/s
      Stream #0:0: Video: gif, bgra, 1080x1920, 15 fps, 14.99 tbr, 100 tbn, 100 tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (gif (native) -> gif (native))
    Press [q] to stop, [?] for help
    Output #0, gif, to './crop.gif':
      Metadata:
        encoder         : Lavf58.76.100
      Stream #0:0: Video: gif, bgr8(pc, progressive), 1080x1198, q=2-31, 200 kb/s, 14.99 fps, 100 tbn
        Metadata:
          encoder         : Lavc58.134.100 gif
    frame=    1 fps=0.0 q=-0.0 size=       0kB time=00:00:00.01 bitrate=   0.0kbits/s speed=5e+03x
    frame=   11 fps=0.0 q=-0.0 size=    4096kB time=00:00:00.68 bitrate=49344.8kbits/s speed=1.31x
    frame=   22 fps= 21 q=-0.0 size=    8192kB time=00:00:01.41 bitrate=47594.9kbits/s speed=1.32x
    frame=   32 fps= 20 q=-0.0 size=   12288kB time=00:00:02.08 bitrate=48395.8kbits/s speed=1.32x
    frame=   43 fps= 20 q=-0.0 size=   16384kB time=00:00:02.81 bitrate=47764.3kbits/s speed=1.33x
    frame=   54 fps= 20 q=-0.0 size=   20992kB time=00:00:03.55 bitrate=48441.3kbits/s speed=1.33x
    frame=   65 fps= 20 q=-0.0 size=   24832kB time=00:00:04.28 bitrate=47528.9kbits/s speed=1.35x
    frame=   77 fps= 21 q=-0.0 size=   28928kB time=00:00:05.08 bitrate=46649.2kbits/s speed=1.36x
    frame=   87 fps= 21 q=-0.0 size=   32768kB time=00:00:05.75 bitrate=46684.4kbits/s speed=1.36x
    frame=   98 fps= 21 q=-0.0 size=   36608kB time=00:00:06.48 bitrate=46279.7kbits/s speed=1.36x
    frame=  109 fps= 21 q=-0.0 size=   40704kB time=00:00:07.22 bitrate=46183.8kbits/s speed=1.37x
    frame=  120 fps= 21 q=-0.0 size=   44800kB time=00:00:07.95 bitrate=46163.7kbits/s speed=1.37x
    frame=  131 fps= 21 q=-0.0 size=   49152kB time=00:00:08.69 bitrate=46335.2kbits/s speed=1.38x
    frame=  142 fps= 21 q=-0.0 size=   53504kB time=00:00:09.42 bitrate=46529.2kbits/s speed=1.37x
    frame=  153 fps= 21 q=-0.0 size=   57344kB time=00:00:10.15 bitrate=46282.0kbits/s speed=1.38x
    frame=  164 fps= 21 q=-0.0 size=   61184kB time=00:00:10.89 bitrate=46025.6kbits/s speed=1.38x
    frame=  174 fps= 21 q=-0.0 size=   65280kB time=00:00:11.55 bitrate=46300.8kbits/s speed=1.37x
    frame=  185 fps= 21 q=-0.0 size=   69376kB time=00:00:12.29 bitrate=46243.1kbits/s speed=1.38x
    frame=  196 fps= 21 q=-0.0 size=   72960kB time=00:00:13.02 bitrate=45905.4kbits/s speed=1.38x
    frame=  207 fps= 21 q=-0.0 size=   76544kB time=00:00:13.76 bitrate=45570.4kbits/s speed=1.38x
    frame=  218 fps= 21 q=-0.0 size=   80640kB time=00:00:14.49 bitrate=45590.3kbits/s speed=1.38x
    frame=  229 fps= 21 q=-0.0 size=   84224kB time=00:00:15.23 bitrate=45302.9kbits/s speed=1.39x
    frame=  240 fps= 21 q=-0.0 size=   88064kB time=00:00:15.96 bitrate=45201.8kbits/s speed=1.39x
    frame=  250 fps= 21 q=-0.0 size=   91648kB time=00:00:16.63 bitrate=45146.1kbits/s speed=1.39x
    frame=  261 fps= 21 q=-0.0 size=   96000kB time=00:00:17.36 bitrate=45301.4kbits/s speed=1.38x
    frame=  271 fps= 21 q=-0.0 size=   99840kB time=00:00:18.03 bitrate=45362.7kbits/s speed=1.38x
    frame=  274 fps= 21 q=-0.0 Lsize=  101552kB time=00:00:18.23 bitrate=45634.2kbits/s speed=1.37x
    video:101552kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000019%

    


    


    It takes 12 seconds.
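
    Not part of the original question, but two approaches commonly suggested for this kind of problem; the filenames and crop coordinates below are placeholders:

    ```shell
    # 1) gifsicle operates on the gif's frame data directly instead of fully
    #    decoding and re-encoding the video, so cropping is typically much
    #    faster than an ffmpeg round trip (gifsicle must be installed):
    gifsicle --crop 0,0+300x300 in.gif -o out.gif

    # 2) If ffmpeg must be used, the standard two-pass palette approach keeps
    #    crop + palette generation + palette use in one filter graph; this
    #    mainly improves output quality, and avoids a separate second pass:
    ffmpeg -i in.gif -filter_complex \
      "[0:v]crop=300:300:0:0,split[a][b];[a]palettegen[p];[b][p]paletteuse" \
      -y out.gif
    ```

    Whether either gets under 1 second depends on the gif's resolution and frame count.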

    


  • Reasons for "Segmentation fault (core dumped)" when using Python extension and FFmpeg

    24 August 2021, by Christian Vorhemus

    I want to write a Python C extension that includes a function convertVideo() that converts a video from one format to another, making use of FFmpeg 3.4.8 (the libav* libraries). The code of the extension is at the end of the question. The extension compiles successfully, but whenever I open Python and try to call it (using a simple Python wrapper code that I don't include here), I get

    


    Python 3.7.10 (default, May  2 2021, 18:28:10)
[GCC 9.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import myModule
>>> myModule.convert_video("/home/admin/video.h264", "/home/admin/video.mp4")
convert 0
convert 1
Format raw H.264 video, duration -9223372036854775808 us
convert 2
Segmentation fault (core dumped)


    


    The interesting thing is, I wrote a simple helper program test_convert.cc that calls convertVideo() like so

    


    #include 
#include 

int convertVideo(const char *in_filename, const char *out_filename);

int main() {
  int res = convertVideo("/home/admin/video.h264", "/home/admin/video.mp4");
  return 0;
}


    


    and I compiled this program making use of the shared library that Python generates when building the C extension like so

    


    gcc test_convert.cc /usr/lib/python3.7/site-packages/_myModule.cpython-37m-aarch64-linux-gnu.so -o test_convert


    


    And it works! The output is

    


    root# ./test_convert
convert 0
convert 1
Format raw H.264 video, duration -9223372036854775808 us
convert 2
convert 3
convert 4
convert 5
convert 6
Output #0, mp4, to '/home/admin/video.mp4':
    Stream #0:0: Video: h264 (High), yuv420p(tv, bt470bg, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31
convert 7
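
    A debugging sketch (not from the original post): when the same shared object works from a standalone C program but segfaults inside Python, one common cause is that the Python process resolves different libav* shared libraries than the test binary (an ABI mismatch). Comparing what the dynamic linker picks up for each can rule this out; the paths below are the ones used above:

    ```shell
    # Print the libav* libraries each binary would load; if the versions or
    # paths differ between the two, the extension and the test program are
    # not running against the same FFmpeg build.
    ldd /usr/lib/python3.7/site-packages/_myModule.cpython-37m-aarch64-linux-gnu.so | grep libav
    ldd ./test_convert | grep libav
    ```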


    


    The extension code looks like this

    


    // NOTE: the original include list was stripped in extraction; these are
    // the headers this file needs to compile as a Python C extension.
#include <Python.h>

#include <cstdio>

extern "C"
{
#include "libavformat/avformat.h"
#include "libavutil/imgutils.h"
}

int convertVideo(const char *in_filename, const char *out_filename)
{
  // Input AVFormatContext and Output AVFormatContext
  AVFormatContext *input_format_context = avformat_alloc_context();
  AVPacket pkt;

  int ret, i;
  int frame_index = 0;
  printf("convert 0\n");
  av_register_all();
  printf("convert 1\n");
  // Input
  if ((ret = avformat_open_input(&input_format_context, in_filename, NULL,
                                 NULL)) < 0)
  {
    printf("Could not open input file.");
    return 1;
  }
  else
  {
    printf("Format %s, duration %lld us\n",
           input_format_context->iformat->long_name,
           input_format_context->duration);
  }
  printf("convert 2\n");
  if ((ret = avformat_find_stream_info(input_format_context, 0)) < 0)
  {
    printf("Failed to retrieve input stream information");
    return 1;
  }
  printf("convert 3\n");
  AVFormatContext *output_format_context = avformat_alloc_context();
  AVPacket packet;
  int stream_index = 0;
  int *streams_list = NULL;
  int number_of_streams = 0;
  int fragmented_mp4_options = 0;
  printf("convert 4\n");
  avformat_alloc_output_context2(&output_format_context, NULL, NULL,
                                 out_filename);
  if (!output_format_context)
  {
    fprintf(stderr, "Could not create output context\n");
    ret = AVERROR_UNKNOWN;
    return 1;
  }
  printf("convert 5\n");
  AVOutputFormat *fmt = av_guess_format(0, out_filename, 0);
  output_format_context->oformat = fmt;

  number_of_streams = input_format_context->nb_streams;
  streams_list =
      (int *)av_mallocz_array(number_of_streams, sizeof(*streams_list));

  if (!streams_list)
  {
    ret = AVERROR(ENOMEM);
    return 1;
  }
  for (i = 0; i < input_format_context->nb_streams; i++)
  {
    AVStream *out_stream;
    AVStream *in_stream = input_format_context->streams[i];
    AVCodecParameters *in_codecpar = in_stream->codecpar;
    if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
        in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
        in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE)
    {
      streams_list[i] = -1;
      continue;
    }
    streams_list[i] = stream_index++;

    out_stream = avformat_new_stream(output_format_context, NULL);
    if (!out_stream)
    {
      fprintf(stderr, "Failed allocating output stream\n");
      ret = AVERROR_UNKNOWN;
      return 1;
    }
    ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
    if (ret < 0)
    {
      fprintf(stderr, "Failed to copy codec parameters\n");
      return 1;
    }
  }
  printf("convert 6\n");
  av_dump_format(output_format_context, 0, out_filename, 1);
  if (!(output_format_context->oformat->flags & AVFMT_NOFILE))
  {
    ret = avio_open(&output_format_context->pb, out_filename, AVIO_FLAG_WRITE);
    if (ret < 0)
    {
      fprintf(stderr, "Could not open output file '%s'", out_filename);
      return 1;
    }
  }
  AVDictionary *opts = NULL;
  printf("convert 7\n");
  ret = avformat_write_header(output_format_context, &opts);
  if (ret < 0)
  {
    fprintf(stderr, "Error occurred when opening output file\n");
    return 1;
  }
  int n = 0;

  while (1)
  {
    AVStream *in_stream, *out_stream;
    ret = av_read_frame(input_format_context, &packet);
    if (ret < 0)
      break;
    in_stream = input_format_context->streams[packet.stream_index];
    if (packet.stream_index >= number_of_streams ||
        streams_list[packet.stream_index] < 0)
    {
      av_packet_unref(&packet);
      continue;
    }
    packet.stream_index = streams_list[packet.stream_index];

    out_stream = output_format_context->streams[packet.stream_index];

    out_stream->codec->time_base.num = 1;
    out_stream->codec->time_base.den = 30;

    packet.pts = n * 3000;
    packet.dts = n * 3000;
    packet.duration = 3000;

    packet.pos = -1;

    ret = av_interleaved_write_frame(output_format_context, &packet);
    if (ret < 0)
    {
      fprintf(stderr, "Error muxing packet\n");
      break;
    }
    av_packet_unref(&packet);
    n++;
  }

  av_write_trailer(output_format_context);
  avformat_close_input(&input_format_context);
  if (output_format_context &&
      !(output_format_context->oformat->flags & AVFMT_NOFILE))
    avio_closep(&output_format_context->pb);
  avformat_free_context(output_format_context);
  av_freep(&streams_list);
  if (ret < 0 && ret != AVERROR_EOF)
  {
    fprintf(stderr, "Error occurred\n");
    return 1;
  }
  return 0;
}
// PyMethodDef and other orchestration code is skipped


    


    What is the reason that the code works as expected in my test_convert but not within Python?

    


  • FFMPEG encoding 16bit video data results in 10bit

    12 March 2023, by Jl arto

    I want to compress a depth map that has 16 bits of information per pixel. In general, such depth maps can be stored in different ways, e.g. p016le, gray16le, yuv420p16le, yuv444p16le, ... but for simplicity, let's assume the depth map is a yuv420p16le (where the y-channel contains the depth).

    


    For some reason when encoding with hevc_nvenc (I use an NVIDIA GTX 1660 Ti GPU), ffmpeg (the command line tool) changes the pixel format to a 10 or 12 bit variant (p010le, gray12le, yuv420p10le, yuv444p12le, ...), but I would like to keep the full 16 bits, since this affects the quality of the depth stored.

    


    For example:

    


    ffmpeg.exe -s:v 1920x1080 -r 30 -pix_fmt yuv420p16le -i depth_yuv420p16le.yuv -c:v hevc_nvenc -pix_fmt yuv444p16le output.mp4


    


    If I use ffprobe on the output.mp4, it tells me that the underlying pixel format is actually yuv444p10le. (Decoding and looking at the raw pixel data, I can confirm that the precision has decreased from 16 bits to 10 bits).
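
    The check the question describes can be reproduced like this (a verification sketch; the filename matches the example above):

    ```shell
    # Print the pixel format ffprobe reports for the first video stream of the
    # encoded file. For the file described in the question, this shows
    # pix_fmt=yuv444p10le rather than the requested yuv444p16le.
    ffprobe -v error -select_streams v:0 \
      -show_entries stream=pix_fmt -of default=noprint_wrappers=1 output.mp4
    ```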

    


    I hope 16 bit compression is possible, since according to

    


    ffmpeg -h encoder=hevc_nvenc


    


    the supported pixel formats are:

    


    hevc_nvenc: yuv420p nv12 p010le yuv444p p016le yuv444p16le bgr0 rgb0 cuda d3d11


    


    But p016le results in a p010le output, and yuv444p16le in yuv444p10le.

    


    Does anyone know where the problem could lie? Should I re-install ffmpeg (version 4.3.2-2021-02-27-essentials_build-www.gyan.dev)? Is it because of Windows 10 having limited encoding/decoding capabilities? Will buying the HEVC Video Extensions help solve this problem?

    


    Additional info: using libx265 does not look like it will work for this purpose, since its supported pixel formats are:

    


    libx265: yuv420p yuvj420p yuv422p yuvj422p yuv444p yuvj444p gbrp yuv420p10le yuv422p10le yuv444p10le gbrp10le yuv420p12le yuv422p12le yuv444p12le gbrp12le gray gray10le gray12le


    


    Any help would be greatly appreciated.