Other articles (56)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04
    If you want to help us improve this list, you can give us access to a machine whose distribution is not listed above, or send the fixes needed to add (...)

  • Installation in farm mode

    4 February 2011

    Farm mode lets you host several MediaSPIP sites while installing its functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge, since the usual SPIP private area is no longer used.
    To begin with, you must have installed the same files as the installation (...)

On other sites (6846)

  • Estimating number of frames and fps in opencv

    11 May 2021, by mrgloom

    I have some .mp4 video; ffmpeg shows me this info:

      Duration: 00:00:07.02, start: 0.000000, bitrate: 18001 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc, bt709/bt709/iec61966-2-1, progressive), 886x1920, 14299 kb/s, 22.54 fps, 60 tbr, 600 tbn, 1200 tbc (default)
    Metadata:
      rotate          : 270
      creation_time   : 2021-04-30T13:56:51.000000Z
      handler_name    : Core Media Video
      encoder         : 'avc1'
    Side data:
      displaymatrix: rotation of 90.00 degrees

    So, as I understand it, that should give 7.02 sec * 22.54 fps ≈ 158 frames.

    When I try to read it in OpenCV:

import cv2

def print_info_cap_reading(video_filepath):
    cap = cv2.VideoCapture(video_filepath)

    # Frame rate and frame count as reported by the backend / container metadata
    fps = cap.get(cv2.CAP_PROP_FPS)
    n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

    print('fps:', round(fps, 2))
    print('n_frames:', n_frames)

    # Count the frames that can actually be decoded
    counter = 0
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        counter += 1

    cap.release()
    print('counter:', counter)

    It shows me:

# fps: 22.54
# n_frames: 199
# counter: 175

    When I tried to convert it to separate frames via ffmpeg, it produced 422 frames:

    ffmpeg -i source1.mp4 tmp/img%03d.jpg

    So I wonder:

    1. Why is fps a float value and not an int?
    2. What is the right way to estimate fps and the number of frames? (See the sketch after this list.)
    3. Why do cv2.CAP_PROP_FRAME_COUNT in OpenCV and actually reading the frames produce different numbers of frames?
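
    A reliable, if slow, way to get the decoded frame count is to make ffprobe actually decode the stream. A minimal sketch, assuming ffprobe is on PATH; it uses the standard stream entries nb_read_frames and avg_frame_rate, and the rational avg_frame_rate (num/den) also suggests why fps naturally comes out as a float rather than an int:

import subprocess

def count_frames_ffprobe(video_filepath):
    # -count_frames makes ffprobe decode the whole stream, so
    # nb_read_frames is the number of frames actually decoded,
    # not a value derived from container metadata.
    result = subprocess.run(
        ['ffprobe', '-v', 'error',
         '-count_frames',
         '-select_streams', 'v:0',
         '-show_entries', 'stream=nb_read_frames,avg_frame_rate',
         '-of', 'default=noprint_wrappers=1',
         video_filepath],
        capture_output=True, text=True, check=True)
    info = dict(line.split('=', 1)
                for line in result.stdout.splitlines() if '=' in line)
    num, den = info['avg_frame_rate'].split('/')  # frame rate is stored as a rational
    fps = float(num) / float(den) if float(den) else float('nan')
    return int(info['nb_read_frames']), fps

# frames, fps = count_frames_ffprobe('source1.mp4')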

    Update:

    -ignore_editlist 1 did not help; ffmpeg still produces 422 frames:

ffmpeg -i source1.mp4 tmp1/img%03d.jpg
ffmpeg -i source1.mp4 -ignore_editlist 1 tmp2/img%03d.jpg

    Here is some ffmpeg output:

    Output #0, image2, to 'tmp1/img%03d.jpg':
  Metadata:
    major_brand     : qt
    minor_version   : 0
    compatible_brands: qt
    com.apple.quicktime.author: ReplayKitRecording
    encoder         : Lavf58.45.100
    Stream #0:0(und): Video: mjpeg, yuvj420p(pc), 1920x886, q=2-31, 200 kb/s, 60 fps, 60 tbn, 60 tbc (default)
    Metadata:
      encoder         : Lavc58.91.100 mjpeg
      creation_time   : 2021-04-30T13:56:51.000000Z
      handler_name    : Core Media Video
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: N/A
      displaymatrix: rotation of -0.00 degrees
frame=  422 fps=224 q=24.8 Lsize=N/A time=00:00:07.03 bitrate=N/A dup=247 drop=0 speed=3.74x
video:13709kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown

    Update:

    mkdir tmp3 && ffmpeg -ignore_editlist 1 -i source1.mp4 tmp3/img%03d.jpg produces even more frames: 529.

    Output #0, image2, to 'tmp3/img%03d.jpg':
  Metadata:
    major_brand     : qt
    minor_version   : 0
    compatible_brands: qt
    com.apple.quicktime.author: ReplayKitRecording
    encoder         : Lavf58.45.100
    Stream #0:0(und): Video: mjpeg, yuvj420p(pc), 1920x886, q=2-31, 200 kb/s, 60 fps, 60 tbn, 60 tbc (default)
    Metadata:
      encoder         : Lavc58.91.100 mjpeg
      creation_time   : 2021-04-30T13:56:51.000000Z
      handler_name    : Core Media Video
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: N/A
      displaymatrix: rotation of -0.00 degrees
frame=  529 fps=221 q=24.8 Lsize=N/A time=00:00:08.81 bitrate=N/A dup=330 drop=0 speed=3.68x
video:16178kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
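
    For the extraction counts themselves, one hedged observation: the dup=247 and dup=330 counters in the logs above appear to show ffmpeg duplicating decoded frames to fill a constant 60 fps image2 timeline. A sketch of extracting one JPEG per decoded frame instead (tmp4/ is a hypothetical output directory; -vsync vfr keeps the decoder's own timestamps rather than padding to a constant rate):

import subprocess

# Extract one image per decoded frame, without the duplication
# implied by the dup= counters in the logs above.
subprocess.run(['ffmpeg', '-i', 'source1.mp4',
                '-vsync', 'vfr', 'tmp4/img%03d.jpg'], check=True)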

  • How to retrieve, process and display frames from a capture device with minimal latency

    14 March 2024, by valle

    I'm currently working on a project where I need to retrieve frames from a capture device, process them, and display them with minimal latency and compression. Initially, my goal is to keep the video stream as close to the source signal as possible, with no noticeable compression or latency. As the project progresses, however, I also want to adjust the framerate and apply image compression.

    I have experimented with FFmpeg, since that was the first thing that came to mind when thinking about capturing video frames and processing them.

    However, I am not satisfied yet, since I am experiencing delay in the stream (no huge delay, but definitely noticeable).
    The command that has worked best for me so far:

    ffmpeg -rtbufsize 512M -f dshow -i video="Blackmagic WDM Capture (4)" -vf format=yuv420p -c:v libx264 -preset ultrafast -qp 0 -an -tune zerolatency -f h264 - | ffplay -fflags nobuffer -flags low_delay -probesize 32 -sync ext -

    I also used OBS to capture the video stream from the capture device, and when looking at the preview there was no noticeable delay. I then tried to simulate the exact same settings using ffmpeg:

    ffmpeg -rtbufsize 512M -f dshow -i video="Blackmagic WDM Capture (4)" -vf format=yuv420p -r 60 -c:v libx264 -preset veryfast -b:v 2500K -an -tune zerolatency -f h264 - | ffplay -fflags nobuffer -flags low_delay -probesize 32 -sync ext -

    But the delay was quite similar to that of the command above.
    I know that OBS probably has much more complex machinery going on (hardware optimization, etc.), but at least this tells me that it is somehow possible to display the stream from the capture device without any noticeable latency (on my setup).

    The approach that has worked best for me so far (in terms of delay) was to use Python and OpenCV to read frames from the capture device and display them. I also implemented my own framerate limiter (not perfect, I know), but when it comes to compression I am rather limited compared to FFmpeg, and the frame processing becomes too slow once framerates reach about 20 fps or more.

import cv2
import time

# Set desired parameters
FRAME_RATE = 15  # Framerate in frames per second
COMPRESSION_QUALITY = 25  # Compression quality for JPEG format (0-100)
COMPRESSION_FLAG = True   # Enable / Disable compression

# Set capture device index (replace 4 with the index of your capture card)
cap = cv2.VideoCapture(4, cv2.CAP_DSHOW)

# Check if the capture device is opened successfully
if not cap.isOpened():
    print("Error: Could not open capture device")
    exit()

# Create an OpenCV window
# TODO: The window is scaled to fullscreen here (the source video is 1920x1080, the display is 1920x1200)
#       I don't know the scaling algorithm behind this, but it seems to be a simple stretch / nearest neighbor
cv2.namedWindow('Frame', cv2.WINDOW_NORMAL)
cv2.setWindowProperty('Frame', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

# Loop to capture and display frames
while True:
    # Start timer for each frame processing cycle
    start_time = time.time()

    # Capture frame-by-frame
    ret, frame = cap.read()

    # If frame is read correctly, proceed
    if ret:
        if COMPRESSION_FLAG:
            # Perform compression
            _, compressed_frame = cv2.imencode('.jpg', frame, [int(cv2.IMWRITE_JPEG_QUALITY), COMPRESSION_QUALITY])
            # Decode the compressed frame
            frame = cv2.imdecode(compressed_frame, cv2.IMREAD_COLOR)

        # Display the frame
        cv2.imshow('Frame', frame)

        # Calculate elapsed time since the start of this frame processing cycle
        elapsed_time = time.time() - start_time

        # Calculate available time for next frame
        available_time = 1.0 / FRAME_RATE

        # Check if processing time exceeds available time
        if elapsed_time > available_time:
            print("Warning: Frame processing time exceeds available time.")

        # Calculate time to sleep to achieve desired frame rate -> maintain a consistent frame rate
        sleep_time = 1.0 / FRAME_RATE - elapsed_time

        # If sleep time is positive, sleep to control frame rate
        if sleep_time > 0:
            time.sleep(sleep_time)

    # Break the loop if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the capture object and close the display window
cap.release()
cv2.destroyAllWindows()
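
    One pattern that might shave latency off the loop above (a sketch, not a drop-in fix): read frames on a dedicated thread and have the display loop always show only the newest frame, so a slow display iteration never makes reads queue up in the driver's buffer. Device index 4 and CAP_DSHOW are taken from the code above; everything else is assumed.

import threading
import cv2

class LatestFrameReader:
    """Capture on a background thread and keep only the newest frame."""
    def __init__(self, index=4, backend=cv2.CAP_DSHOW):
        self.cap = cv2.VideoCapture(index, backend)
        if not self.cap.isOpened():
            raise RuntimeError("Error: Could not open capture device")
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        self.thread = threading.Thread(target=self._reader, daemon=True)
        self.thread.start()

    def _reader(self):
        # Pull frames as fast as the device delivers them; older ones are dropped.
        while self.running:
            ret, frame = self.cap.read()
            if ret:
                with self.lock:
                    self.frame = frame

    def latest(self):
        with self.lock:
            return self.frame

    def stop(self):
        self.running = False
        self.thread.join(timeout=1.0)
        self.cap.release()

reader = LatestFrameReader()
while True:
    frame = reader.latest()
    if frame is not None:
        cv2.imshow('Frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
reader.stop()
cv2.destroyAllWindows()
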
    I also thought about getting the SDK of the capture device in order to improve performance.
    But since I am used to scripting languages rather than low-level programming, I thought I would reach out to the Stack Overflow community first and see if anybody has hints about better approaches, or any tips on how I could increase my performance.

    Any help is appreciated!

  • Get Proper Progress Updates on Two Long Waited Concurrent Processes in ASP.NET

    17 July 2012, by irfanmcsd

    I implemented background video processing using the .NET FFmpeg wrapper from http://www.mediasoftpro.com, with a progress bar indicating how much of the video has been processed; this information is sent to the web page to update the progress bar. It works fine when only a single process runs at a time, but with two concurrent processes (say, two video publishing jobs started at once from two different computers), the progress bar suddenly shows a mixed progress status.
    Here is my code, where I used static objects to send a single instance's information to the progress bar.

    static string FileName = "grey_03";
    protected void Page_Load(object sender, EventArgs e)
    {
       if (!Page.IsPostBack)
       {
           if (Request.Params["file"] != null)
           {
               FileName = Request.Params["file"].ToString();
           }
       }
    }
    public static double ProgressValue = 0;
    public static MediaHandler _mhandler = new MediaHandler();

    [WebMethod]
    public static string EncodeVideo()
    {
       // MediaHandler _mhandler = new MediaHandler();
       string RootPath = HttpContext.Current.Server.MapPath(HttpContext.Current.Request.ApplicationPath);
       _mhandler.FFMPEGPath = HttpContext.Current.Server.MapPath("~\\ffmpeg_july_2012\\bin\\ffmpeg.exe");
       _mhandler.InputPath = RootPath + "\\contents\\original";
       _mhandler.OutputPath = RootPath + "\\contents\\mp4";
       _mhandler.BackgroundProcessing = true;
       _mhandler.FileName = "Grey.avi";
       _mhandler.OutputFileName =FileName;
       string presetpath = RootPath + "\\ffmpeg_july_2012\\presets\\libx264-ipod640.ffpreset";
       _mhandler.Parameters = " -b:a 192k -b:v 500k -fpre \"" + presetpath + "\"";
       _mhandler.OutputExtension = ".mp4";
       _mhandler.VCodec = "libx264";
       _mhandler.ACodec = "libvo_aacenc";
       _mhandler.Channel = 2;
       _mhandler.ProcessMedia();
       return _mhandler.vinfo.ErrorCode.ToString();
    }

    [WebMethod]
    public static string GetProgressStatus()
    {
       return Math.Round(_mhandler.vinfo.ProcessingCompleted, 2).ToString();
       // if vinfo.processingcomplete==100, then you can get complete information from vinfo object and store it in database and perform other processing.
    }

    Here are the jQuery functions responsible for updating the progress bar indicator every second:

    $(function () {
            $("#vprocess").on({
                click: function (e) {
                    ProcessEncoding();
                    var IntervalID = setInterval(function () {
                        GetProgressValue(IntervalID);
                    }, 1000);
                    return false;
                }
            }, '#btn_process');

        });
        function GetProgressValue(intervalid) {
            $.ajax({
                type: "POST",
                url: "concurrent_03.aspx/GetProgressStatus",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (msg) {
                    // Do something interesting here.
                    $("#pstats").text(msg.d);
                    $("#pbar_int_01").attr('style', 'width: ' + msg.d + '%;');
                    if (msg.d == "100") {
                        $('#pbar01').removeClass("progress-danger");
                        $('#pbar01').addClass("progress-success");
                        if (intervalid != 0) {
                            clearInterval(intervalid);
                        }
                        FetchInfo();
                    }
                }
            });
        }

    The problem arises from the static MediaHandler object:

    public static MediaHandler _mhandler = new MediaHandler();

    I need a way to keep the two concurrent processes' information separate from each other, so that the progress bar is updated with the value that belongs exactly to its own process.
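
    One way out, sketched rather than prescribed: keep one progress entry per encoding job, keyed by a job id that EncodeVideo generates and returns to the page, and have GetProgressStatus take that id and look up only its own job. In ASP.NET, a static ConcurrentDictionary keyed by the job id could play this role; below is a language-agnostic sketch of the pattern in Python, with all names hypothetical.

import threading
import uuid

# One progress slot per job instead of one shared static object.
_progress = {}
_lock = threading.Lock()

def start_job():
    """Create a job and hand its id back to the client page."""
    job_id = str(uuid.uuid4())
    with _lock:
        _progress[job_id] = 0.0
    return job_id

def report_progress(job_id, percent):
    """Called by the encoding worker as it advances."""
    with _lock:
        _progress[job_id] = percent

def get_progress(job_id):
    """Polled by the client every second, like GetProgressStatus."""
    with _lock:
        return _progress.get(job_id, 0.0)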