Other articles (37)

  • The MediaSPIP configuration area

    29 November 2010

    The MediaSPIP configuration area is restricted to administrators. An "administer" menu link is usually displayed at the top of the page [1].
    It lets you configure your site in fine detail.
    Navigation within this configuration area is divided into three parts: the general site configuration, which among other things lets you modify the main information about the site (...)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances in the shared hosting on a regular basis. Coupled with a system Cron on the central site of the shared hosting, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (7640)

  • How do I get FFMPEG to build a video using the same timing as my input?

    15 April 2016, by Forest J. Handford

    I’m trying to create a video of the screen actions a user takes by piping screenshots to FFMPEG from a C# console application. I’m sending 10 frames per second. The final video has exactly as many frames as I sent (i.e., a 10-second video has 100 frames). The duration of the video, however, does not match. With the code below I get 7m 47s of video from 490751 ms of input. I’ve found that PTS gets me a little closer, but it feels like I’m doing something wrong.

       private const int VID_FRAME_FPS = 10;
       private const double PTS = 2.4444;

       /// <summary>
       /// Generates the Videos by gathering frames and processing via FFMPEG.
       /// Deletes the generated Frame images after successfully compiling the video.
       /// </summary>
       public static void RecordScreen(string pathToOutput)
       {
           Logger.log.Info("Launching FFMPEG ....");
           String arg = "-f image2pipe -i pipe:.bmp -filter:v \"setpts = " + PTS + " * PTS\" -r " + VID_FRAME_FPS + " -pix_fmt yuv420p -qscale:v 5 -vcodec libvpx -bufsize 30000k -y \"" + pathToOutput + "\\VidOut.webm\"";
           //String arg = "-f image2pipe -i pipe:.bmp -filter:v \"setpts = " + PTS + " * PTS\" -r " + VID_FRAME_FPS + " -pix_fmt yuv420p -qscale:v 5 -vcodec libx264 -bufsize 30000k -y \"" + pathToOutput + "\\VidOut.mp4\"";
           Process launchingFFMPEG = new Process
           {
               StartInfo = new ProcessStartInfo
               {
                   FileName = "ffmpeg",
                   Arguments = arg,
                   UseShellExecute = false,
                   CreateNoWindow = true,
                   RedirectStandardInput = true
               }
           };
           launchingFFMPEG.Start();

           System.Drawing.Image img;
           Stopwatch stopWatch = Stopwatch.StartNew(); // creates and starts the Stopwatch instance
           int sleep;

           Stopwatch vidTime = Stopwatch.StartNew();

           do
           {
               img = Capture.GetScreen();
               img.Save(launchingFFMPEG.StandardInput.BaseStream, System.Drawing.Imaging.ImageFormat.Bmp);
               img.Dispose();

               sleep = 10 * VID_FRAME_FPS - (int)stopWatch.ElapsedMilliseconds;
               if (sleep > 0)
               {
                   Logger.log.Info("Captured frame, sleeping " + sleep + " milliseconds.");
                   Thread.Sleep(sleep);
               }
               stopWatch.Restart();
           } while (workerThread.IsAlive);
           Logger.log.Debug("Video Time: " + vidTime.ElapsedMilliseconds);
           launchingFFMPEG.StandardInput.Flush();
           launchingFFMPEG.StandardInput.Close();
           launchingFFMPEG.Close();
       }

    Is there a way to do this without PTS? If I need PTS, what is the correct value? A PTS of 2.565656 seems close to correct.

    All the related documentation points to just using -r (the framerate option), but that doesn’t work the way I’m using it.
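    One hedged possibility: when no input rate is declared, ffmpeg’s image2pipe demuxer assumes piped images arrive at 25 fps, and 25/10 = 2.5 is suspiciously close to the setpts factors (roughly 2.44 to 2.57) found above by trial. Declaring the input framerate before -i, instead of putting -r on the output, should make the timestamps match wall-clock time with no setpts filter at all. A minimal sketch of the modified argument string, untested against this exact setup:

       // Sketch: declare the input rate so ffmpeg stamps the piped frames at
       // 10 fps rather than the image2pipe default of 25 fps. Input options
       // must appear before -i; the setpts filter is dropped entirely.
       String arg = "-framerate " + VID_FRAME_FPS + " -f image2pipe -i pipe:.bmp" +
           " -pix_fmt yuv420p -qscale:v 5 -vcodec libvpx -bufsize 30000k -y \"" +
           pathToOutput + "\\VidOut.webm\"";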

    Note: I’m only using H.264 for debugging with ffprobe; I plan to switch back to WebM when this is resolved. I’m trying to avoid the H.264 and MP4 patents.

  • Implementing custom H.264 quantization for FFmpeg?

    27 February 2017, by user2989813

    I have a Raspberry Pi, and I’m livestreaming using FFmpeg. Unfortunately, my WiFi signal varies over the course of my stream. I’m currently using raspivid to send H.264-encoded video to the stream. I have set a constant resolution and FPS, but have not set the bitrate or quantization, so they are variable.

    However, the issue is that the quantization doesn’t vary enough for my needs. If my WiFi signal drops, my ffmpeg streaming speed will dip below 1.0x to around 0.95x for minutes, but my bitrate drops so slowly that ffmpeg can never make it back to 1.0x. As a result, my stream runs into problems and starts buffering.

    I would like the following to happen:
    If FFmpeg’s reported speed for my stream command goes below 1.0x (slower than real-time streaming), increase the quantization (compressing harder, lowering the bitrate) exponentially until the speed stabilizes at 1.0x. Prioritize stabilizing at 1.0x as quickly as possible.
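    For observing that speed programmatically: ffmpeg’s -progress option writes machine-readable key=value lines (including speed=) to a file or pipe. Below is a minimal C# sketch of such a feedback loop; the input and RTMP URL are placeholders, and AdjustQuantization is a hypothetical hook, since neither raspivid nor ffmpeg exposes a runtime QP control this way, so the hook would have to restart the pipeline with new settings:

       using System;
       using System.Diagnostics;
       using System.Globalization;

       class SpeedWatcher
       {
           static void Main()
           {
               // Hypothetical invocation: "-progress pipe:1" makes ffmpeg emit
               // key=value progress lines (frame=, bitrate=, speed=, ...) on
               // stdout, one block per update.
               var ffmpeg = new Process
               {
                   StartInfo = new ProcessStartInfo
                   {
                       FileName = "ffmpeg",
                       Arguments = "-i pipe:0 -c:v copy -f flv rtmp://example.com/live -progress pipe:1",
                       UseShellExecute = false,
                       CreateNoWindow = true,
                       RedirectStandardOutput = true
                   }
               };
               ffmpeg.Start();

               string line;
               while ((line = ffmpeg.StandardOutput.ReadLine()) != null)
               {
                   // Progress lines look like "speed=0.95x"; the EndsWith check
                   // guards against "speed=N/A".
                   if (line.StartsWith("speed=") && line.EndsWith("x"))
                   {
                       string value = line.Substring("speed=".Length).TrimEnd('x');
                       double speed;
                       if (double.TryParse(value, NumberStyles.Float,
                                           CultureInfo.InvariantCulture, out speed)
                           && speed < 1.0)
                       {
                           AdjustQuantization(speed); // hypothetical hook, see note above
                       }
                   }
               }
           }

           // Placeholder for the actual control action: e.g. restart raspivid
           // with a lower bitrate / higher QP, backing off exponentially while
           // the speed stays below 1.0x.
           static void AdjustQuantization(double speed)
           {
               Console.WriteLine("speed " + speed + "x below real time; tighten quantization");
           }
       }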

    My understanding is that the quantization logic FFmpeg uses should be in the H.264 encoder, but I can’t find any mention of quantization at all in this GitHub repository: https://github.com/cisco/openh264
    My knowledge of H.264 is almost zilch, so I’m trying to figure out:

    A) How does H.264 currently vary the quantization during my stream, if at all?

    B) Where is that code?

    C) How hard is it for me to implement what I’m describing?

    Thanks in advance!

  • Checkinstall equivalent on Red Hat (Santiago)

    29 octobre 2013, par Dalius

    I'm not familiar with Red Hat; I've never used it before.

    I'm installing ffmpeg from source, following this guide: https://trac.ffmpeg.org/wiki/CentosCompilationGuide

    On Debian, after using make to compile ffmpeg, I would use checkinstall to install ffmpeg for all users. How can I do the same on Red Hat?
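
    Two hedged starting points rather than a definitive recipe: checkinstall itself can emit RPM packages instead of .debs, and the native Red Hat route is to package the build with rpmbuild. This assumes checkinstall can be built or otherwise obtained on the machine (it is not in the stock Red Hat repositories), and ffmpeg.spec stands for a spec file you would write yourself:

       # Option 1: checkinstall can build an RPM instead of a .deb.
       # Run from the ffmpeg source tree; the default command is "make install".
       sudo checkinstall -R

       # Option 2 (native): write a minimal ffmpeg.spec and let rpmbuild drive
       # the build and packaging, then install the resulting RPM (it typically
       # lands under ~/rpmbuild/RPMS/<arch>/).
       rpmbuild -bb ffmpeg.spec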