Other articles (67)

  • Contributing to the documentation

    10 April 2011

    Documentation is one of the most important and most demanding tasks in building a technical tool.
    Any outside contribution on this front is essential: critiquing what already exists; helping write articles aimed at users (MediaSPIP administrators or simply content producers) or at developers; creating explanatory screencasts; translating the documentation into a new language.
    To take part, you can sign up at (...)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Making files available

    14 April 2011

    By default, on initialization, MediaSPIP does not let visitors download files, whether originals or the result of their transformation or encoding; it only lets them be viewed.
    However, it is possible, and easy, to give visitors access to these documents in various forms.
    All of this happens on the template configuration page: go to the channel's administration area and choose, in the navigation, (...)

On other sites (6468)

  • How to retrieve, process and display frames from a capture device with minimal latency

    14 March 2024, by valle

    I'm currently working on a project where I need to retrieve frames from a capture device, process them, and display them with minimal latency and compression. My initial goal is to keep the video stream as close to the source signal as possible, ensuring no noticeable compression or latency. As the project progresses, however, I also want to be able to adjust the framerate and apply image compression.

    I have experimented with FFmpeg, since that was the first thing that came to mind for capturing video frames and processing them.

    However, I am not satisfied yet, since I am experiencing a delay in the stream (no huge delay, but definitely noticeable).
The command that has worked best for me so far:

    ffmpeg -rtbufsize 512M -f dshow -i video="Blackmagic WDM Capture (4)" -vf format=yuv420p -c:v libx264 -preset ultrafast -qp 0 -an -tune zerolatency -f h264 - | ffplay -fflags nobuffer -flags low_delay -probesize 32 -sync ext -
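
    One tweak worth trying first (my addition, using standard FFmpeg input options rather than anything from the post): the ffplay flags above only cut buffering on the display side, so reducing input-side probing and buffering as well may help:

    ffmpeg -fflags nobuffer -probesize 32 -analyzeduration 0 -rtbufsize 512M -f dshow -i video="Blackmagic WDM Capture (4)" -vf format=yuv420p -c:v libx264 -preset ultrafast -qp 0 -an -tune zerolatency -f h264 - | ffplay -fflags nobuffer -flags low_delay -probesize 32 -sync ext -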

    I also used OBS to capture the video stream from the capture device, and looking at its preview there was no noticeable delay. I then tried to reproduce the exact same settings using ffmpeg:

    ffmpeg -rtbufsize 512M -f dshow -i video="Blackmagic WDM Capture (4)" -vf format=yuv420p -r 60 -c:v libx264 -preset veryfast -b:v 2500K -an -tune zerolatency -f h264 - | ffplay -fflags nobuffer -flags low_delay -probesize 32 -sync ext -

    But the delay was quite similar to that of the command above.
I know that OBS probably has much more complex machinery going on (hardware optimization etc.), but at least this tells me that it is somehow possible to display the stream from the capture device without any noticeable latency (on my setup).

    The approach that has worked best for me so far (in terms of delay) is to use Python and OpenCV to read frames from the capture device and display them. I also implemented my own frame-rate limiter (not perfect, I know), but when it comes to compression I am rather limited compared to FFmpeg, and the frame processing is also too slow once framerates reach about 20 fps or more.

    import cv2
import time

# Set desired parameters
FRAME_RATE = 15  # Framerate in frames per second
COMPRESSION_QUALITY = 25  # Compression quality for JPEG format (0-100)
COMPRESSION_FLAG = True   # Enable / Disable compression

# Set capture device index (4 here; replace it with the index of your capture card)
cap = cv2.VideoCapture(4, cv2.CAP_DSHOW)

# Check if the capture device is opened successfully
if not cap.isOpened():
    print("Error: Could not open capture device")
    exit()

# Create an OpenCV window
# TODO: The window is scaled to fullscreen here (The source video is 1920x1080, the display is 1920x1200)
#       I don't know the scaling algorithm behind this, but it seems to be a simple stretch / nearest neighbor
cv2.namedWindow('Frame', cv2.WINDOW_NORMAL)
cv2.setWindowProperty('Frame', cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

# Loop to capture and display frames
while True:
    # Start timer for each frame processing cycle
    start_time = time.time()

    # Capture frame-by-frame
    ret, frame = cap.read()

    # If frame is read correctly, proceed
    if ret:
        if COMPRESSION_FLAG:
            # Perform compression
            _, compressed_frame = cv2.imencode('.jpg', frame, [int(cv2.IMWRITE_JPEG_QUALITY), COMPRESSION_QUALITY])
            # Decode the compressed frame
            frame = cv2.imdecode(compressed_frame, cv2.IMREAD_COLOR)

        # Display the frame
        cv2.imshow('Frame', frame)

        # Calculate elapsed time since the start of this frame processing cycle
        elapsed_time = time.time() - start_time

        # Calculate available time for next frame
        available_time = 1.0 / FRAME_RATE

        # Check if processing time exceeds available time
        if elapsed_time > available_time:
            print("Warning: Frame processing time exceeds available time.")

        # Calculate time to sleep to achieve desired frame rate -> maintain a consistent frame rate
        sleep_time = 1.0 / FRAME_RATE - elapsed_time

        # If sleep time is positive, sleep to control frame rate
        if sleep_time > 0:
            time.sleep(sleep_time)

    # Break the loop if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release the capture object and close the display window
cap.release()
cv2.destroyAllWindows()
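
    A variant that might shave off further latency (my own sketch, not from the original post; the bottleneck guess and all names are assumptions): cap.read() blocks until the device delivers a frame, so doing capture and display on one thread ties the display to the device's delivery rhythm. Reading frames on a background thread and always showing only the newest one decouples the two:

import cv2
import threading

class LatestFrameReader:
    """Grab frames on a background thread, keeping only the newest one."""

    def __init__(self, index):
        # Same device index and DirectShow backend as the script above
        self.cap = cv2.VideoCapture(index, cv2.CAP_DSHOW)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        self.thread = threading.Thread(target=self._reader, daemon=True)
        self.thread.start()

    def _reader(self):
        # Drain the device as fast as it produces frames; stale frames are dropped
        while self.running:
            ret, frame = self.cap.read()
            if ret:
                with self.lock:
                    self.frame = frame

    def read(self):
        # Return a copy of the most recent frame (None until the first one arrives)
        with self.lock:
            return None if self.frame is None else self.frame.copy()

    def release(self):
        self.running = False
        self.thread.join()
        self.cap.release()

reader = LatestFrameReader(4)
cv2.namedWindow('Frame', cv2.WINDOW_NORMAL)
while True:
    frame = reader.read()
    if frame is not None:
        cv2.imshow('Frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
reader.release()
cv2.destroyAllWindows()

    Some capture backends also honor cap.set(cv2.CAP_PROP_BUFFERSIZE, 1) to shrink the driver-side frame queue, though support varies by backend and driver.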

    I have also thought about getting the SDK of the capture device in order to improve performance.
But since I am more used to scripting languages than to low-level programming, I thought I would reach out to the Stack Overflow community first and see if anybody has hints about better approaches, or any tips on how I could increase performance.

    Any help is appreciated!

  • Can't decode RTMP stream from Adobe FMS

    25 July 2013, by Mike Versteeg

    I have written code to decode RTMP streams, but ran into a problem decoding a stream from FMS. The same stream from a Wowza server works fine, but when using Adobe FMS I keep getting the same error (note it works fine in a Flash player).

    I can confirm the problem using ffmpeg.exe; here's the output from the latest git build. Does anyone have an idea?

    ffmpeg version N-54901-g55db06a Copyright (c) 2000-2013 the FFmpeg developers
     built on Jul 23 2013 18:01:29 with gcc 4.7.3 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-av
    isynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enab
    le-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetyp
    e --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --ena
    ble-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-l
    ibopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libsp
    eex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-
    amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --
    enable-libxvid --enable-zlib
     libavutil      52. 40.100 / 52. 40.100
     libavcodec     55. 19.100 / 55. 19.100
     libavformat    55. 12.102 / 55. 12.102
     libavdevice    55.  3.100 / 55.  3.100
     libavfilter     3. 81.102 /  3. 81.102
     libswscale      2.  4.100 /  2.  4.100
     libswresample   0. 17.103 /  0. 17.103
     libpostproc    52.  3.100 / 52.  3.100
    Parsing...
    Parsed protocol: 0
    Parsed host    : [removed for privacy reasons]
    Parsed app     : vidlivestream/_definst_/stream
    RTMP_Connect1, ... connected, handshaking
    HandShake: Type Answer   : 03
    HandShake: Server Uptime : 506058230
    HandShake: FMS Version   : 4.5.5.1
    HandShake: Handshaking finished....
    RTMP_Connect1, handshaked
    Invoking connect
    HandleServerBW: server BW = 1250000
    HandleClientBW: client BW = 1250000 2
    HandleChangeChunkSize, received: chunk size change to 1024
    HandleCtrl, received ctrl. type: 6, len: 6
    HandleCtrl, Ping 506058630
    sending ctrl. type: 0x0007
    RTMP_ClientPacket, received: invoke 242 bytes
    (object begin)
    Property:
    Property:
    Property:
    (object begin)
    Property: 4,5,5,4013>
    Property:
    Property:
    (object end)
    Property:
    (object begin)
    Property:
    Property:
    Property:
    Property:
    Property:
    (object begin)
    Property:
    (object end)
    (object end)
    (object end)
    HandleInvoke, server invoking <_result>
    HandleInvoke, received result for method call <connect>
    sending ctrl. type: 0x0003
    Invoking createStream
    RTMP_ClientPacket, received: invoke 21 bytes
    (object begin)
    Property:
    Property:
    Property: NULL
    (object end)
    HandleInvoke, server invoking <onbwdone>
    Invoking _checkbw
    RTMP_ClientPacket, received: invoke 29 bytes
    (object begin)
    Property:
    Property:
    Property: NULL
    Property:
    (object end)
    HandleInvoke, server invoking <_result>
    HandleInvoke, received result for method call <createstream>
    SendPlay, seekTime=0, stopTime=0, sending play: test
    Invoking play
    sending ctrl. type: 0x0003
    RTMP_ClientPacket, received: invoke 16419 bytes
    (object begin)
    Property:
    Property:
    Property: NULL
    Property:  K H 7 ~ + $ K Z #   ! v 1 < m N % h 9 n G t % J M p 1 f # t %
    ^ u ( I ^ ) < 5 : ? @ a V O < S n [ * y N y T e * 3 P 1 F ! 6 #   + ( w > W \ -
    : = ` _ 6 q $ - 0 e x G . ' 4 [ * / 0 / & _ l ] @ k 8 )v>
    Property:
    (object end)
    HandleInvoke, server invoking <_onbwcheck>
    Invoking _result
    HandleChangeChunkSize, received: chunk size change to 1024
    RTMP_ClientPacket, received: invoke 142 bytes
    (object begin)
    Property:
    Property:
    Property: NULL
    Property:
    (object begin)
    Property:
    Property:
    Property:
    Property:
    (object end)
    (object end)
    HandleInvoke, server invoking <onstatus>
    HandleInvoke, onStatus: NetStream.Play.Failed
    Closing connection: NetStream.Play.Failed

    PS: although there is some resemblance to this topic, it is very old (certainly in FFmpeg terms) and its suggestions make no difference.
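
    A hedged aside, not part of the original question: with librtmp-enabled builds like the one above, live FMS streams often refuse a plain play request, and the usual workaround is to flag the stream as live, either via the librtmp URL option live=1 (space-separated at the end of the URL) or, with FFmpeg's native RTMP client, via -rtmp_live live. The server name below is a placeholder; app and stream names are taken from the log:

    ffmpeg -i "rtmp://yourserver/vidlivestream/_definst_/stream/test live=1" -c copy -f flv dump.flv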

  • error: ‘avcodec_send_packet’ was not declared in this scope

    4 July 2018, by StarShine

    The following snippet of FFmpeg-based code builds and works on Windows with VC2012, VC2015 and VC2017.

    With gcc on Ubuntu 16.04 LTS it gives me issues; more specifically, it does not seem to recognize avcodec_send_packet, avcodec_receive_frame or struct AVCodecParameters, and possibly more functions and structures that I'm not currently using.

    error: ‘AVCodecParameters’ was not declared in this scope
    error: ‘avcodec_send_packet’ was not declared in this scope
    error: ‘avcodec_receive_frame’ was not declared in this scope

    The code snippet is:

    // the includes are actually in a precompiled header, included in cmake
    extern "C" {

    #include <libavcodec/avcodec.h>
    #include <libavdevice/avdevice.h>
    #include <libavfilter/avfilter.h>
    #include <libpostproc/postprocess.h>
    #include <libswresample/swresample.h>
    #include <libswscale/swscale.h>
    #include <libavformat/avformat.h>
    #include <libavutil/avutil.h>
    #include <libavutil/avassert.h>
    #include <libavutil/avstring.h>
    #include <libavutil/bprint.h>
    #include <libavutil/display.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/imgutils.h>
    //#include <libavutil/libm.h>
    #include <libavutil/parseutils.h>
    #include <libavutil/pixdesc.h>
    #include <libavutil/eval.h>
    #include <libavutil/dict.h>
    #include <libavutil/opt.h>
    #include <libavutil/cpu.h>
    #include <libavutil/ffversion.h>
    #include <libavutil/version.h>

    }

    //
    ...
    {
       if (av_read_frame(m_FormatContext, m_Packet) < 0) {
           av_packet_unref(m_Packet);
           m_AllPacketsSent = true;
       } else {
           if (m_Packet->stream_index == m_StreamIndex) {                  
               avcodec_send_packet(m_CodecContext, m_Packet);
           }
       }
    }
    ...
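
    Not part of the original snippet, but one hedged way to keep a single source building against both old and new headers: avcodec_send_packet()/avcodec_receive_frame() appeared around libavcodec 57.37 (treat the exact number as approximate), so the version macros can guard the call site. The helper below is hypothetical; ctx, frame and pkt stand in for the m_CodecContext / m_Packet style members above.

    static int decode_packet(AVCodecContext *ctx, AVFrame *frame, AVPacket *pkt)
    {
    #if LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(57, 37, 100)
        /* New decoupled API: feed a packet in, pull decoded frames out. */
        int ret = avcodec_send_packet(ctx, pkt);
        if (ret < 0)
            return ret;
        return avcodec_receive_frame(ctx, frame);
    #else
        /* Older API: deprecated in recent libavcodec, but the only option in old ones. */
        int got = 0;
        int ret = avcodec_decode_video2(ctx, frame, &got, pkt);
        return ret < 0 ? ret : (got ? 0 : AVERROR(EAGAIN));
    #endif
    }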

    I read up on FFmpeg's history and learned that Debian-based systems at one point followed the Libav fork when it came about, and that recently some of the platforms switched back to the FFmpeg branch, because FFmpeg was much more actively maintained in terms of bug fixes, features and support. As a result, some of the interfaces were possibly broken.

    I’ve seen git fixes on a library called Gerbera (the MediaTomb successor), which seems to have encountered the same, if not very similar, issues with codecpar (which I initially also had and fixed the same way):

    https://github.com/gerbera/gerbera/issues/52

    https://github.com/gerbera/gerbera/commit/32efd463f138557c54535225d84136df95bab3dd#diff-af3b638bc2a3e6c650974192a53c7291

    Here the commit seems to fix their specific issue by wrapping the codecpar field so it maps back to the old codec field, which I also applied and which works.

    I wonder if anyone knows which functions can be used in place of the ones in the errors above, since these functions themselves replace deprecated functionality according to the FFmpeg avcodec.h header comments (https://www.ffmpeg.org/doxygen/trunk/avcodec_8h_source.html). I hope this does not mean I have to settle back into avcodec_encode_video2()-style functions?

    Update:

    For reference, the same problem seems to have popped up here as well: https://github.com/Motion-Project/motion/issues/338. The issue seems to be resolved if you can rebuild your FFmpeg stack.

    Update:

    To resolve the version/API mix-up, I ended up wiping out every FFmpeg reference and rebuilding FFmpeg from source. This pushed things further in the right direction; my source now compiles correctly, but there is still something wrong with the way I'm linking things together.

    Also, I’m using CMake to set up my makefiles, and using find_package for some of the dependencies and handwritten find_path / find_library stuff for everything else. I’ve seen other people complain about the following linking issue, and a ton of case-specific replies but none of them really shed some light on what the actual problem is. My installed Ubuntu version of ALSA is 1.1.xx but still I get complaints about a 0.9 version I’m supposedly linking. Anyone knows what’s wrong with this ?

    Also, my libasound.so is symlinked to libasound.so.2.0.0, if that clears anything up. (I hope the double-slashed path at the end is correct as well.)

    /usr/bin/ld: /usr/lib/ffmpeg/libavdevice.a(alsa.o): undefined reference to symbol 'snd_pcm_hw_params_any@@ALSA_0.9' //usr/lib/x86_64-linux-gnu/libasound.so.2:
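
    For what it's worth (my reading, not verified against the poster's setup): the @@ALSA_0.9 suffix is a versioned-symbol tag inside modern libasound, not a sign of an old 0.9 install; the undefined reference usually means the static libavdevice.a needs its transitive dependencies, libasound among them, placed after it on the link line. A hedged CMake sketch, with myapp as a placeholder target, that lets pkg-config supply the whole link line:

    find_package(PkgConfig REQUIRED)
    # The imported target carries the transitive link flags (-lasound, -lz, ...) in order.
    pkg_check_modules(FFMPEG REQUIRED IMPORTED_TARGET
        libavdevice libavformat libavcodec libswscale libswresample libavutil)
    target_link_libraries(myapp PRIVATE PkgConfig::FFMPEG)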