Advanced search

Media (0)

Keyword: - Tags -/optimisation

No media matching your criteria is available on this site.

Other articles (11)

  • The farm's regular Cron tasks

    1 December 2010

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the mutualisation on a regular basis. Coupled with a system Cron on the central site of the mutualisation, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...) A hypothetical system-cron entry is sketched after this list.

  • Selection of projects using MediaSPIP

    2 May 2011

    The examples below are representative of how MediaSPIP is used for specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area, and at the national level, among the half-dozen associations of this kind. Its members (...)

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites for publishing documents of all kinds.
    It creates "media", namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only one document can be linked to a so-called "media" article;
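
    The following is only an illustration of the system Cron mentioned in the farm article above; the schedule is ordinary crontab syntax, but the URL and the exact way SPIP exposes its Cron trigger are assumptions that depend on the installation.

    # Hypothetical crontab entry on the central host of the farm: request the
    # central site every minute so its super Cron task (gestion_mutu_super_cron)
    # can in turn trigger the Cron of every mutualised instance.
    * * * * * curl -fsS "https://central.example.org/spip.php?action=cron" > /dev/null 2>&1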

On other sites (2251)

  • Using ffmpeg.exe makes the computer slow; in Task Manager it takes over 1 GB of memory. How can I fix it?

    29 May 2013, by Revuen Ben Dror

    In my Form1 I have this code:

    private void StartRecording_Click(object sender, EventArgs e)
    {
        ffmp.Start("test.avi", 25);
        timer1.Enabled = true;
    }

    ffmp is a variable of my Ffmpeg class.
    In this class I add frames to a pipe and create an AVI file.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using System.Drawing;
    using System.IO.Pipes;
    using System.Runtime.InteropServices;
    using System.Diagnostics;

    namespace ScreenVideoRecorder
    {
       class Ffmpeg
       {
           NamedPipeServerStream p;
           String pipename = "mytestpipe";
           byte[] b;
           System.Diagnostics.Process process;

           public Ffmpeg()
           {

           }

           public void Start(string FileName, int BitmapRate)
           {
               p = new NamedPipeServerStream(pipename, PipeDirection.Out, 1, PipeTransmissionMode.Byte);
               b = new byte[1920 * 1080 * 3]; // 3 bytes per pixel for a 1920x1080 (1080p) frame; note that this buffer is never actually used below
               process = new System.Diagnostics.Process();
               process.StartInfo.FileName = @"D:\pipetest\pipetest\ffmpegx86\ffmpeg.exe";
               process.EnableRaisingEvents = false;
               process.StartInfo.WorkingDirectory = @"D:\pipetest\pipetest\ffmpegx86";
               process.StartInfo.Arguments = @"-f rawvideo -pix_fmt bgr0 -video_size 1920x1080 -i \\.\pipe\mytestpipe -map 0 -c:v libx264 -r " + BitmapRate + " " + FileName;
               process.Start();

               // Note: the two StartInfo settings below are assigned after Start()
               // has already been called, so they have no effect on the process that
               // was just launched; they would need to be set before Start().
               process.StartInfo.UseShellExecute = false;
               process.StartInfo.CreateNoWindow = false;

               p.WaitForConnection();
           }

           public void PushFrame(Bitmap bmp)
           {

               int length;
               // Lock the bitmap's bits.
               Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
               //Rectangle rect = new Rectangle(0, 0, 1280, 720);
               System.Drawing.Imaging.BitmapData bmpData =
                   bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly,
                   bmp.PixelFormat);

               int absStride = Math.Abs(bmpData.Stride);
               // Get the address of the first line.
               IntPtr ptr = bmpData.Scan0;

               // Declare an array to hold the bytes of the bitmap.
               //length = 3 * bmp.Width * bmp.Height;
               length = absStride * bmpData.Height;
               byte[] rgbValues = new byte[length];

               // Copy the bitmap into rgbValues one scan line at a time, honouring
               // the stride (source row j lands at the same row index in the buffer).
               int j = bmp.Height - 1;
               for (int i = 0; i < bmp.Height; i++)
               {
                   // Note: Scan0.ToInt32() truncates the pointer in 64-bit processes;
                   // IntPtr.Add(bmpData.Scan0, bmpData.Stride * j) would be safer.
                   IntPtr pointer = new IntPtr(bmpData.Scan0.ToInt32() + (bmpData.Stride * j));
                   System.Runtime.InteropServices.Marshal.Copy(pointer, rgbValues, absStride * (bmp.Height - i - 1), absStride);
                   j--;
               }

               p.Write(rgbValues, 0, length);

               bmp.UnlockBits(bmpData);
           }

           public void Close()
           {
               p.Close();
           }
       }
    }

    The problem is that when I run my application from Visual Studio 2012 Pro and click the button, it opens a console window and starts processing.

    I tracked ffmpeg.exe in Task Manager and saw that it started at 996 MB and very quickly jumped to 1040 MB of memory usage. CPU usage was only 16%.

    Once I ended the ffmpeg.exe task, everything went back to running smoothly.
    While it was running, if I tried to drag my Form around the screen, for example, it moved slowly and with some stuttering.

    After closing ffmpeg.exe I could drag the Form around quickly and smoothly, the way it should be.

    I tried to google it and found some others with what I think is the same problem, but I'm not sure where the problem is or how to fix it.

    I'm not sure which version of ffmpeg.exe I'm using, but I read in a few places that it doesn't work better with newer versions; then again, I may be mistaken here.

    My Windows is 8 with 6 GB of RAM.
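
    Not from the original post, but a possible lead: at 1920x1080 much of that memory is likely libx264's internal lookahead and reference-frame buffers, which grow with the preset and thread count. The following is only a hedged sketch of an adjusted Arguments line to experiment with; the option values are assumptions, not a verified fix.

    // Hypothetical variant of the Arguments string used above: a faster x264
    // preset, a shorter rate-control lookahead and a capped thread count all
    // reduce the encoder's internal buffering (at some cost in compression).
    process.StartInfo.Arguments =
        @"-f rawvideo -pix_fmt bgr0 -video_size 1920x1080 -i \\.\pipe\mytestpipe " +
        @"-map 0 -c:v libx264 -preset ultrafast -x264-params rc-lookahead=10 " +
        @"-threads 2 -r " + BitmapRate + " " + FileName;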

  • avformat/matroskaenc: Actually apply timestamp offset for Opus

    31 August 2022, by Andreas Rheinhardt
    avformat/matroskaenc: Actually apply timestamp offset for Opus
    

    Matroska generally requires timestamps to be nonnegative, but
    there is an exception: Data that corresponds to encoder delay
    and is not supposed to be output anyway can have a negative
    timestamp. This is achieved by using the CodecDelay header
    field: The demuxer has to subtract this value from the raw
    (nonnegative) timestamps of the corresponding track.
    Therefore the muxer has to add this value first to write
    this raw timestamp.
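
    To make the add/subtract relationship concrete, here is a small arithmetic sketch; it is an illustration, not part of the commit, and the 6.5 ms figure is only the typical Opus pre-skip of 312 samples at 48 kHz, used as an assumption.

    // Hypothetical figures, all in nanoseconds.
    long codecDelay = 6_500_000;                 // written into the CodecDelay header field
    long inputPts   = -6_500_000;                // first packet: pure encoder delay, negative pts
    long storedTs   = inputPts + codecDelay;     // the muxer adds CodecDelay -> 0, nonnegative on disk
    long outputPts  = storedTs - codecDelay;     // a demuxer subtracts it again -> -6.5 ms, as intended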

    Support for writing CodecDelay has been added in FFmpeg commit
    d92b1b1babe69268971863649c225e1747358a74 and in Libav commit
    a1aa37dd0b96710d4a17718198a3f56aea2040c1. The former simply
    wrote the header field and did not apply any timestamp offsets,
    leading to desynchronisation (if one uses multiple tracks).
    The latter applied it at two places, but not at the one where
    it actually matters, namely in mkv_write_block(), leading to
    the same desynchronisation as with the former commit. It furthermore
    used the wrong stream timebase to convert the delay to the
    stream's timebase, as the conversion used the timebase from
    before avpriv_set_pts_info().

    When the latter was merged in 82e4f39883932c1b1e5c7792a1be12dec6ab603d,
    it was only done in a deactivated state that still did not
    offset the timestamps when muxing due to "assertion failures
    and av sync errors". a1aa37dd0b96710d4a17718198a3f56aea2040c1
    made it definitely more likely to run into assertion failures
    (namely if the relative block timestamp doesn't fit into an int16_t).

    Yet all of the above issues have been fixed (in commits
    962d63157322466a9a82f9f9d84c1b6f1b582f65,
    5d3953a5dcfd5f71391b7f34908517eb6f7e5146 and
    4ebeab15b037a21f195696cef1f7522daf42f3ee). This commit therefore
    enables applying CodecDelay, fixing ticket #7182.

    There is just one slight regression from this: If one has input
    with encoder delay where the first timestamp is negative, but
    the pts of the part of the data that is actually intended to be
    output is nonnegative, then the timestamps will currently by default
    be shifted to make them nonnegative before they reach the muxer;
    the muxer will then ensure that the shifted timestamps are retained.
    Before this commit, the muxer did not ensure this; instead the
    timestamps that the demuxer will output were shifted and
    if the first timestamp of the actually intended output was zero
    before shifting, then this unintentional shift just cancels
    the shift performed before the packet reached the muxer.
    (But notice that this only applies if all the tracks use the same
    CodecDelay, or the relative sync between tracks will be impaired.)
    This happens in the matroska-opus-remux and matroska-ogg-opus-remux
    FATE tests. Future commits will forward the information that
    the Matroska muxer has a limited capability to handle negative
    timestamps so that the shifting in libavformat can take advantage
    of it.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavformat/matroskaenc.c
    • [DH] tests/ref/fate/matroska-ogg-opus-remux
    • [DH] tests/ref/fate/matroska-opus-remux
  • Live AAC and H264 data into live stream

    10 May 2024, by tzuleger

    I have a remote camera that captures H264-encoded video and AAC-encoded audio, places the data into a custom ring buffer, and sends it to a Node.js socket server, where each packet is detected as audio or video and handled accordingly. That data should turn into a live stream; the protocol doesn't matter, but the delay has to be around 4 seconds and it has to be playable on iOS and Android devices.

    After reading hundreds of pages of documentation, questions, or solutions on the internet, I can't seem to find anything about handling two separate streams of AAC and H264 data to create a live stream.

    Despite attempting many different ways of achieving this goal, even having a working implementation of HLS, I want to revisit ALL options of live streaming, and I am hoping someone out there can give me advice or guidance to specific documentation on how to achieve this goal.

    To be specific, this is our goal:

      • Stream AAC and H264 data from a remote cellular camera to a server that does some work on that data and live streams it to one user (possibly more users in the future) on a mobile iOS or Android device.

      • The delay of the live stream should be at most 4 seconds; if the user has bad signal, then a longer delay is okay, as we obviously cannot do anything about that.

      • We should not have to re-encode our data. We've explored WebRTC, but it requires Opus audio packets and would thus require us to re-encode the data, which would be expensive for our server to run.

    Any and all help, ranging from re-visiting an old approach we took to exploring new ones, is appreciated.

    I can provide code snippets as well for our current implementation of LLHLS if it helps, but I figured this post is already long enough.

    I've tried FFmpeg with named pipes, expecting it to just work, but FFmpeg kept blocking on the first named pipe input. I thought of just writing the data out to two files and then using FFmpeg, but it's continuous data and I don't know enough about FFmpeg to see how that kind of setup could be used to create one live stream.
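
    For context (not from the post): this blocking is expected with named pipes, because ffmpeg opens its inputs one after another and the open of the first FIFO stalls until a writer attaches. Below is only a rough sketch of the kind of two-pipe invocation that works once both pipes are fed concurrently; the paths, input formats and HLS output settings are assumptions rather than the poster's actual setup.

    # Hypothetical two-pipe setup on a Linux host: both FIFOs need writers
    # attached at the same time, otherwise ffmpeg blocks on the first input.
    mkfifo /tmp/video.h264 /tmp/audio.aac
    ffmpeg -f h264 -framerate 30 -i /tmp/video.h264 \
           -f aac -i /tmp/audio.aac \
           -c copy -f hls -hls_time 2 -hls_list_size 6 /var/www/live/stream.m3u8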

    I've tried implementing our own RTSP server on the camera using GStreamer (our camera had its RTSP server stripped out; that wasn't my call), but the camera's flash storage cannot handle having GStreamer on it, so that wasn't an option.

    My latest attempt was using a derivation of hls-parser to create an HLS manifest and mux.js to create MP4 containers for .m4s fragmented MP4 segments and do an HLS live stream. This was my most successful attempt: we had a live stream going, but the delay was up to 16 seconds, as one would expect with HLS live streaming. We could drop the target duration down to 2 seconds and get a delay of about 6-8 seconds, but this could be unreliable, as these cameras can have very little signal, making it relatively expensive to send so many IDR frames over such low bandwidth.
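
    As a point of reference (not from the post): standard HLS players typically start about three target durations behind the live edge, which is why a 2-second target duration tends to land around 6-8 seconds of latency. A minimal live playlist sketch with invented segment names:

    #EXTM3U
    #EXT-X-VERSION:6
    #EXT-X-TARGETDURATION:2
    #EXT-X-MEDIA-SEQUENCE:120
    #EXT-X-MAP:URI="init.mp4"
    #EXTINF:2.000,
    segment120.m4s
    #EXTINF:2.000,
    segment121.m4s
    #EXTINF:2.000,
    segment122.m4s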

    With delay being the only remaining issue, I attempted to upgrade the implementation to support Apple's Low-Latency HLS. It seems to work: the right partial segments are being requested and everything that makes it LLHLS appears to be working as intended, but the delay isn't going down when played in iOS' native AVPlayer; in fact, it looks like it has worsened.
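
    Again only a hedged sketch, not the poster's manifest: for AVPlayer to actually sit near the live edge, the playlist needs the Low-Latency HLS server-control and partial-segment tags, and the hold-back values largely dictate the observed latency (PART-HOLD-BACK must be at least three part target durations). The names and durations below are invented.

    #EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0,HOLD-BACK=6.0
    #EXT-X-PART-INF:PART-TARGET=0.333
    #EXT-X-PART:DURATION=0.333,URI="seg122.part0.m4s",INDEPENDENT=YES
    #EXT-X-PART:DURATION=0.333,URI="seg122.part1.m4s"
    #EXT-X-PRELOAD-HINT:TYPE=PART,URI="seg122.part2.m4s"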

    I would also like to add a disclaimer: my knowledge of media streaming is fairly limited. I've learned most of what I talk about in this post over the past 3 months by reading RFCs, documentation, and Stack Overflow/Reddit questions and answers. If anything appears confusing, it may just be my own lack of understanding.
