
Media (91)

Other articles (48)

  • Farm notifications

    1 December 2010, by

    To ensure the farm is managed correctly, several things must trigger notifications during specific actions, both to the user concerned and to all of the farm's administrators.
    Status-change notifications
    When an instance's status changes, all of the farm's administrators must be notified of the modification, as well as the user who administers the instance.
    When a channel is requested
    Transition to the "publie" status
    Transition to the (...)

  • The farm's recurring Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the shared hosting ("mutualisation") on a regular basis. Combined with a system Cron on the central site of the mutualisation, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your Médiaspip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (5163)

  • copy .wav audio file settings to new .wav file

    18 November 2020, by Jonas

    Currently I am working with a speech-to-text model that takes a .wav file and turns the audible speech within the audio into a text transcript. The model worked before on .wav audio recordings that were recorded directly. However, now I am trying to do the same with audio that originally came from a video.

    The steps are as follows:

    • retrieve a video file from a stream url through ffmpeg
    • strip the .aac audio from the video
    • convert the .aac audio to .wav
    • save the .wav to s3 for later usage
    The ffmpeg commands I use are listed below for reference:

      rm /tmp/jonas/*
      ffmpeg -i {stream_url} -c copy -bsf:a aac_adtstoasc /tmp/jonas/(unknown).aac
      ffmpeg -i /tmp/jonas/(unknown).aac /tmp/jonas/(unknown).wav
      aws s3 cp /tmp/jonas/(unknown).wav {s3_audio_save_location}

    The problem now is that my speech-to-text model no longer works on this audio. I use sox to convert the audio, but sox does not seem to grab it, and without sox the model does not work either. This leads me to believe there is a difference in the .wav audio formatting. I would therefore like to know how I can either format the new .wav with the same settings as a .wav that does work, or find a way to compare the two files' formatting and set the new .wav to the correct format manually through ffmpeg.

    I tried with PyPy exiftool and found the metadata of the two files:

    The metadata of the working .wav file is: [screenshot]

    The metadata of the .wav file that does not work is: [screenshot]

    So, as can be seen, the working .wav file has some settings that differ and that I would like to mimic in the second .wav file; presumably that would make my model work again :)
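
    A way to pin down the mismatch, and then to force the conversion to a known format, could look like the sketch below. The file names and the 16 kHz/mono/16-bit target are assumptions, not values from the question; the real numbers should be read off the working file first (speech models typically expect a fixed rate, mono, 16-bit PCM).

```shell
#!/bin/sh
# Sketch only: file names and the target format (16 kHz, mono, 16-bit PCM)
# are assumptions -- read the real values from the working file first, e.g.:
#   ffprobe -v error -show_entries stream=codec_name,sample_rate,channels \
#       -of default=noprint_wrappers=1 working.wav

# Build an ffmpeg command that pins the output format explicitly instead of
# relying on ffmpeg's defaults for the .aac -> .wav conversion.
wav_convert_cmd() {
  in=$1; out=$2; rate=$3; channels=$4
  echo "ffmpeg -i $in -acodec pcm_s16le -ar $rate -ac $channels $out"
}

# Dry run: print the command that would be executed.
wav_convert_cmd /tmp/jonas/in.aac /tmp/jonas/out.wav 16000 1
```

    Dropping the `echo` (or piping the printed line to `sh`) runs the conversion; `pcm_s16le` forces signed 16-bit little-endian samples, and `-ar`/`-ac` pin the rate and channel count rather than inheriting them from the .aac source.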

    


    with kind regards,
Jonas

    


  • FFmpeg AutoGen - set encoding settings

    6 December 2018, by user4964351

    I’m using FFmpeg's auto-generated unsafe bindings to capture an RTSP (h264) stream like so:

           AVFormatContext* input_context = ffmpeg.avformat_alloc_context();

           AVDictionary* opts = null;
           ffmpeg.av_dict_set(&opts, "rtsp_transport", "tcp", 0); // set tcp transport

           // try connect to rtsp stream
           if (ffmpeg.avformat_open_input(&input_context, "rtsp://...", null, &opts) != 0)
           {
               return;
           }

           if (ffmpeg.avformat_find_stream_info(input_context, null) < 0)
           {
               return;
           }

           // search video stream
           int video_stream_index = 0;
           for (int i = 0; i < input_context->nb_streams; i++)
           {
               if (input_context->streams[i]->codec->codec_type == AVMediaType.AVMEDIA_TYPE_VIDEO)
                   video_stream_index = i;
           }

           AVPacket packet;
           ffmpeg.av_init_packet(&packet);

           // create output file
           AVOutputFormat* format = ffmpeg.av_guess_format("mp4", null, null);
           AVFormatContext* output_context = null;
           ffmpeg.avformat_alloc_output_context2(&output_context, format, null, null);
           output_context->oformat = format;
           ffmpeg.avio_open2(&output_context->pb, "test.mp4", ffmpeg.AVIO_FLAG_WRITE, null, null);

            // start playing input stream
           ffmpeg.av_read_play(input_context);
           AVStream* stream = null;
           double stream_base_time_double = 0;

           var cnt = 0;

           // start reading packets from stream and write them to file
           while (ffmpeg.av_read_frame(input_context, &packet) == 0 && cnt < 200)
           {
               if (packet.stream_index == video_stream_index)
               {
                   if (stream == null) // create stream in output file
                   {
                       stream = ffmpeg.avformat_new_stream(output_context, input_context->streams[video_stream_index]->codec->codec);

                       // copy params from input stream
                       ffmpeg.avcodec_parameters_from_context(stream->codecpar, input_context->streams[video_stream_index]->codec);
                       stream->sample_aspect_ratio = input_context->streams[video_stream_index]->codec->sample_aspect_ratio;
                       // ffmpeg.av_opt_set_int(stream->codec, "crf", 25, 0); // this does not work
                       ffmpeg.avformat_write_header(output_context, null);
                   }
                   packet.stream_index = stream->id;
                   ffmpeg.av_interleaved_write_frame(output_context, &packet);
                   cnt++;
               }
               ffmpeg.av_packet_unref(&packet);
               ffmpeg.av_init_packet(&packet);
           }

           ffmpeg.av_read_pause(input_context);
           ffmpeg.av_write_trailer(output_context);
           ffmpeg.avio_close(output_context->pb);
           ffmpeg.avformat_close_input(&input_context);
           ffmpeg.avformat_free_context(input_context);
           ffmpeg.avformat_free_context(output_context);

    This code works and creates a small test.mp4 video file.

    Now, I want to reduce the file size a bit. For that I want to use the crf flag.

    I can do that from the command line:

    ffmpeg -rtsp_transport tcp -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov -crf 25 F:\garbage\test.mp4

    But I can't make it work from code.
    So far I have tried setting crf like so:

    ffmpeg.av_opt_set_int(stream->codec, "crf", 25, 0);

    and

    ffmpeg.av_opt_set_int(stream->codec->priv_data, "crf", 25, 0);

    But it does nothing.

    Question: How can I set the crf option from code?

    I know that I'm missing something obvious, but I just can't figure it out myself.

    Any help would be appreciated.
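
    For context, one likely explanation: crf is a private option of the libx264 encoder, so it only takes effect when frames are actually re-encoded. The loop above writes the input packets unchanged (a stream copy), so setting crf on the stream's codec context cannot change the file size; the CLI invocation works because writing mp4 without -c copy re-encodes with libx264 by default. The difference, sketched as two CLI variants with assumed file names:

```shell
#!/bin/sh
# Sketch (assumed file names): the same capture expressed as two CLI modes.
# choose_codec prints the command for either mode so the contrast is explicit.
choose_codec() {
  if [ "$1" = "copy" ]; then
    # Stream copy: packets pass through untouched, so -crf would be ignored.
    echo "ffmpeg -i in.mp4 -c:v copy out.mp4"
  else
    # Re-encode with libx264: -crf is applied by the encoder.
    echo "ffmpeg -i in.mp4 -c:v libx264 -crf 25 out.mp4"
  fi
}

choose_codec copy
choose_codec libx264
```

    Translated back to the bindings, this suggests opening a real libx264 AVCodecContext, setting crf on its priv_data before avcodec_open2, and decoding/re-encoding each frame instead of writing the read packets directly.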

  • How can I use if/else with FFmpeg to check if a video is portrait or landscape and execute different settings?

    27 September 2023, by Maaaaaars

    I'm using FFmpeg to generate previews for the videos added to my website. The site script's (KVS) built-in tools help to some extent by automatically creating previews with the following properties:

    • Max. duration: 15s
    • Offset from beginning: 10%
    • Offset from end: 10%
    • Fragments: 5 (each lasting about 3s)
    • Crossfade between fragments: 1s

    However, I don't know how to recreate those through the FFmpeg command line, in addition to adding an if/else part that will check if the video is landscape or portrait.

    If the video is landscape, the previous properties should execute and generate the preview.

    If the video is portrait, I'd like to execute the previously-listed properties as well as additional ones to insert a blurred video in the background.

    I'm using the following command line for that, and it generates exactly what I had in mind.

    ffmpeg -i source-video.mp4 -vf "split[original][copy];[copy]scale=ih*16/9:-1,crop=h=iw*9/16,gblur=sigma=20[blurred];[blurred][original]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" new-video.mp4

    I'm also using the following flags, which I'd like to apply to the preview video regardless of orientation.

    -c:v libx264 -c:a copy -movflags +faststart -preset veryfast -strict 0 -f mp4

    I found several unrelated solutions for if/else, but none of them really match what I'm trying to achieve; they only add to the confusion.

    Note: I am not using PHP, Python, or otherwise, and would like to know whether it's achievable only through the FFmpeg command line itself.
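
    As far as I know, ffmpeg's CLI has no built-in if/else, so the orientation check has to live in whatever invokes ffmpeg. A minimal POSIX shell wrapper is sketched below; input.mp4 and preview.mp4 are placeholder names, and the filter graph and flags are the ones quoted above. The script only prints the command it would run, so the branch logic can be tried without touching any files.

```shell
#!/bin/sh
# Sketch: pick the preview command from the source video's orientation.
# input.mp4 / preview.mp4 are placeholders; the filter graph and flags
# are copied from the question above.

choose_vf() {
  # $1 = width, $2 = height; portrait gets the blurred-background graph,
  # landscape gets no extra filter (empty output).
  if [ "$2" -gt "$1" ]; then
    echo 'split[original][copy];[copy]scale=ih*16/9:-1,crop=h=iw*9/16,gblur=sigma=20[blurred];[blurred][original]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2'
  fi
}

# In real use, read the dimensions with ffprobe:
#   W=$(ffprobe -v error -select_streams v:0 -show_entries stream=width  -of csv=p=0 input.mp4)
#   H=$(ffprobe -v error -select_streams v:0 -show_entries stream=height -of csv=p=0 input.mp4)
W=1080; H=1920   # hard-coded here so the sketch runs without ffprobe

VF=$(choose_vf "$W" "$H")
if [ -n "$VF" ]; then
  set -- -vf "$VF"
else
  set --
fi
# Dry run: print the full command instead of executing it.
echo ffmpeg -i input.mp4 "$@" -c:v libx264 -c:a copy -movflags +faststart -preset veryfast -strict 0 -f mp4 preview.mp4
```

    Dropping the `echo` on the last line executes the command; since KVS fills in a command template, the same branch could also live in the template's wrapper script rather than in ffmpeg itself.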