Advanced search

Media (91)

Other articles (63)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    Users can also edit their profile from their author page: a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (administration) section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" (language management) section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language; once one has, it is greyed out in the configuration and (...)

  • The farm's recurring Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)

On other sites (4967)

  • Use FFMPEG to put images separately inside a "box", keeping their original X and Y positions relative to its boundaries, and maybe modify their offsets

    14 September 2020, by karl-police

    I have a collection of images. For this example, I created 3 images with different sizes; the image itself is the same, except that it is shifted further down, left or right.

    1: 2: 3:

    These are the images. Taken together they fit in a 35x39 area, which means each one needs to go inside a 35x39 image so they can later be assembled into a GIF, for example. The "crop" filter does not really work here, since it only makes images smaller and cannot enlarge them, and I can't imagine it being the best solution for this anyway.

    So this is the invisible 35x39 sized box.

    image:

    So what I'm trying to do is figure out how to put each of these images separately into the 35x39 box while maintaining the original X position, the Y position, or both, measured from the images' boundaries. I want to be able to do this for other transparent images for similar purposes, mostly for building animations; here, a GIF from image collections, but the images need to be fixed first.

    I tried looking in the FFMPEG documentation, but there are so many filters that I had some trouble finding the right one. I'm also not sure whether it is then possible to change the X and Y offsets; I suspect that if there is something that keeps the original X and Y position, there is probably also something to change the offset as well.

    The end result for the images could basically be:

    1: 2: 3:

    In this end-result example, the images basically have their X aligned at the top and their Y aligned at the left. I'm not sure you can call it the "original X position", because if I compare it to Photoshop's special paste that keeps the original position, it puts the first image a bit further down, for example, for some reason. So I just moved the X all the way to the top and the Y all the way to the left.
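
    One way to sketch this with FFmpeg is the pad filter, which places a frame onto a larger canvas at a chosen x/y offset and can fill the rest with transparency. The snippet below is a minimal Python sketch, not something from the question itself; the file names and offsets are placeholders to replace with the real per-image values.

import subprocess

# Hypothetical file names and per-image (x, y) offsets inside the 35x39 box.
frames = [
    ("frame1.png", 0, 0),
    ("frame2.png", 4, 2),
    ("frame3.png", 8, 4),
]

for name, x, y in frames:
    # format=rgba keeps the alpha channel; pad puts the image onto a 35x39
    # canvas at offset (x, y), filling the rest with fully transparent black.
    vf = f"format=rgba,pad=w=35:h=39:x={x}:y={y}:color=black@0"
    subprocess.run(["ffmpeg", "-y", "-i", name, "-vf", vf, "boxed_" + name],
                   check=True)

    Since all the padded frames then share the same 35x39 geometry, they can be fed to a GIF encoder as an ordinary image sequence.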

  • Functionality (supposedly) of shutil.which("ffmpeg") when the pyinstaller flag --windowed is used

    8 June 2021, by vanilla

    I have a GitHub project for my app Scout, which I compile with pyinstaller. I just added a feature that uses ffmpeg, and I use shutil.which() to check whether it is installed, so I can warn the user and disable the features that rely on it.

    When I run it straight with python3 scout.py, or build with virtually any flags other than --windowed / --noconsole, it returns /usr/bin/ffmpeg as expected. With those flags it returns None, even though I have ffmpeg.

    Those flags make it so the app doesn't have a console in the background; I'm not sure exactly what I can do about this. I have tried using some of these flags to solve the problem, but pyinstaller doesn't seem to recognize them.

    I have also tried if os.system("ffmpeg -version") != 0:, but the same thing happens: it returns 0 for anything but the windowed version.
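
    A minimal workaround sketch, assuming the windowed build simply starts with a stripped-down PATH (an assumption, not something verified here): ask shutil.which() to search a few conventional install directories explicitly. The directory list below is hypothetical.

import os
import shutil

def find_ffmpeg():
    # First try whatever PATH the process actually has.
    found = shutil.which("ffmpeg")
    if found:
        return found
    # Fall back to a few conventional locations (hypothetical list), in case a
    # windowed build was launched with a minimal environment.
    fallback = os.pathsep.join(["/usr/local/bin", "/usr/bin", "/opt/homebrew/bin"])
    return shutil.which("ffmpeg", path=fallback)

print(find_ffmpeg() or "ffmpeg not found")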

    Any help?

  • Reasons for "Segmentation fault (core dumped)" when using Python extension and FFmpeg

    24 August 2021, by Christian Vorhemus

    I want to write a Python C extension that includes a function convertVideo() that converts a video from one format to another using FFmpeg 3.4.8 (the libav* libraries). The code of the extension is at the end of the question. The extension compiles successfully, but whenever I open Python and want to call it (using a simple Python wrapper code that I don't include here), I get

    Python 3.7.10 (default, May  2 2021, 18:28:10)
[GCC 9.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import myModule
>>> myModule.convert_video("/home/admin/video.h264", "/home/admin/video.mp4")
convert 0
convert 1
Format raw H.264 video, duration -9223372036854775808 us
convert 2
Segmentation fault (core dumped)

    The interesting thing is, I wrote a simple helper program test_convert.cc that calls convertVideo() like so

    #include <cstdio>
#include <cstdlib>

int convertVideo(const char *in_filename, const char *out_filename);

int main() {
  int res = convertVideo("/home/admin/video.h264", "/home/admin/video.mp4");
  return 0;
}

    and I compiled this program making use of the shared library that Python generates when building the C extension like so

    gcc test_convert.cc /usr/lib/python3.7/site-packages/_myModule.cpython-37m-aarch64-linux-gnu.so -o test_convert

    And it works! The output is

    root# ./test_convert
convert 0
convert 1
Format raw H.264 video, duration -9223372036854775808 us
convert 2
convert 3
convert 4
convert 5
convert 6
Output #0, mp4, to '/home/admin/video.mp4':
    Stream #0:0: Video: h264 (High), yuv420p(tv, bt470bg, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31
convert 7

    The extension code looks like this

    #include <Python.h>  // Python C API, needed for the PyMethodDef table below

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>

extern "C"
{
#include "libavformat/avformat.h"
#include "libavutil/imgutils.h"
}

int convertVideo(const char *in_filename, const char *out_filename)
{
  // Input AVFormatContext and Output AVFormatContext
  AVFormatContext *input_format_context = avformat_alloc_context();
  AVPacket pkt;

  int ret, i;
  int frame_index = 0;
  printf("convert 0\n");
  av_register_all();
  printf("convert 1\n");
  // Input
  if ((ret = avformat_open_input(&input_format_context, in_filename, NULL,
                                 NULL)) < 0)
  {
    printf("Could not open input file.");
    return 1;
  }
  else
  {
    printf("Format %s, duration %lld us\n",
           input_format_context->iformat->long_name,
           input_format_context->duration);
  }
  printf("convert 2\n");
  if ((ret = avformat_find_stream_info(input_format_context, 0)) < 0)
  {
    printf("Failed to retrieve input stream information");
    return 1;
  }
  printf("convert 3\n");
  AVFormatContext *output_format_context = avformat_alloc_context();
  AVPacket packet;
  int stream_index = 0;
  int *streams_list = NULL;
  int number_of_streams = 0;
  int fragmented_mp4_options = 0;
  printf("convert 4\n");
  avformat_alloc_output_context2(&output_format_context, NULL, NULL,
                                 out_filename);
  if (!output_format_context)
  {
    fprintf(stderr, "Could not create output context\n");
    ret = AVERROR_UNKNOWN;
    return 1;
  }
  printf("convert 5\n");
  AVOutputFormat *fmt = av_guess_format(0, out_filename, 0);
  output_format_context->oformat = fmt;

  number_of_streams = input_format_context->nb_streams;
  streams_list =
      (int *)av_mallocz_array(number_of_streams, sizeof(*streams_list));

  if (!streams_list)
  {
    ret = AVERROR(ENOMEM);
    return 1;
  }
  for (i = 0; i < input_format_context->nb_streams; i++)
  {
    AVStream *out_stream;
    AVStream *in_stream = input_format_context->streams[i];
    AVCodecParameters *in_codecpar = in_stream->codecpar;
    if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
        in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
        in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE)
    {
      streams_list[i] = -1;
      continue;
    }
    streams_list[i] = stream_index++;

    out_stream = avformat_new_stream(output_format_context, NULL);
    if (!out_stream)
    {
      fprintf(stderr, "Failed allocating output stream\n");
      ret = AVERROR_UNKNOWN;
      return 1;
    }
    ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
    if (ret < 0)
    {
      fprintf(stderr, "Failed to copy codec parameters\n");
      return 1;
    }
  }
  printf("convert 6\n");
  av_dump_format(output_format_context, 0, out_filename, 1);
  if (!(output_format_context->oformat->flags & AVFMT_NOFILE))
  {
    ret = avio_open(&output_format_context->pb, out_filename, AVIO_FLAG_WRITE);
    if (ret < 0)
    {
      fprintf(stderr, "Could not open output file '%s'", out_filename);
      return 1;
    }
  }
  AVDictionary *opts = NULL;
  printf("convert 7\n");
  ret = avformat_write_header(output_format_context, &opts);
  if (ret < 0)
  {
    fprintf(stderr, "Error occurred when opening output file\n");
    return 1;
  }
  int n = 0;

  // Remux loop: copy each packet from the input to the output container,
  // remapping its stream index and rewriting its timestamps along the way.
  while (1)
  {
    AVStream *in_stream, *out_stream;
    ret = av_read_frame(input_format_context, &packet);
    if (ret < 0)
      break;
    in_stream = input_format_context->streams[packet.stream_index];
    if (packet.stream_index >= number_of_streams ||
        streams_list[packet.stream_index] < 0)
    {
      av_packet_unref(&packet);
      continue;
    }
    packet.stream_index = streams_list[packet.stream_index];

    out_stream = output_format_context->streams[packet.stream_index];

    // Force a fixed 1/30 time base on the (deprecated) per-stream codec context.
    out_stream->codec->time_base.num = 1;
    out_stream->codec->time_base.den = 30;

    // Synthesize evenly spaced timestamps (3000 ticks per packet) rather than
    // copying them, since raw H.264 input has no container timestamps of its own.
    packet.pts = n * 3000;
    packet.dts = n * 3000;
    packet.duration = 3000;

    packet.pos = -1;

    ret = av_interleaved_write_frame(output_format_context, &packet);
    if (ret < 0)
    {
      fprintf(stderr, "Error muxing packet\n");
      break;
    }
    av_packet_unref(&packet);
    n++;
  }

  av_write_trailer(output_format_context);
  avformat_close_input(&input_format_context);
  if (output_format_context &&
      !(output_format_context->oformat->flags & AVFMT_NOFILE))
    avio_closep(&output_format_context->pb);
  avformat_free_context(output_format_context);
  av_freep(&streams_list);
  if (ret < 0 && ret != AVERROR_EOF)
  {
    fprintf(stderr, "Error occurred\n");
    return 1;
  }
  return 0;
}
// PyMethodDef and other orchestration code is skipped

    What is the reason that the code works as expected in my test_convert but not within Python?
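
    One avenue worth ruling out (an assumption on my part, not something the post establishes): the Python process may already have a different build of the libav* libraries loaded, for example pulled in by another installed package, and mixing two ABI-incompatible copies in one process can crash right where avformat_find_stream_info starts doing real work. A small Linux-only diagnostic sketch that lists the FFmpeg shared objects mapped into the running interpreter:

import myModule  # the extension from the question

def loaded_ffmpeg_libraries():
    # /proc/self/maps lists every file mapped into this process (Linux only);
    # keep just FFmpeg's shared objects (libav*, libsw*).
    libs = set()
    with open("/proc/self/maps") as maps:
        for line in maps:
            fields = line.split()
            path = fields[-1] if len(fields) >= 6 else ""
            if "libav" in path or "libsw" in path:
                libs.add(path)
    return sorted(libs)

for lib in loaded_ffmpeg_libraries():
    print(lib)

    If two different versions of, say, libavformat show up, that would fit the behaviour above: the standalone test_convert binary links a single copy and never sees the conflict.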