Media (91)

Other articles (80)

  • The plugin: mutualisation management

    2 March 2010, by

    The mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its purpose is to provide a pure SPIP solution to replace the older setup.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customise the central mes_options.php file as you wish. As an example, here is the one used by the mediaspip.net platform:
    <?php (...)

  • Videos

    21 April 2011, by

    As with "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 video tag.
    One drawback of this tag is that it is not correctly recognised by some browsers (Internet Explorer, not to name names) and that each browser natively supports only certain video formats.
    Its main advantage, on the other hand, is native video playback in the browser, which removes the need for Flash and (...)

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images for a comparison.
    To use it, simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (8623)

  • PyQt5 GUI dependent on FFmpeg compiled with PyInstaller doesn't run on other machines?

    19 October 2022, by Soren

    I am trying to create a simple PyQt5 GUI for Windows 10 that uses OpenAI's Whisper model to transcribe a sound file and output the results to an Excel file. It works on my own computer, where I have installed the necessary dependencies for Whisper as stated on their GitHub, i.e. FFmpeg. I provide a minimal example of my code below:

    # Import library
    import whisper
    import os
    from PyQt5 import QtCore, QtGui, QtWidgets
    import pandas as pd
    import xlsxwriter


    class Ui_Dialog(QtWidgets.QDialog):

        # Define functions to use in GUI

        # Define function for selecting input files
        def browsefiles(self, Dialog):

            # Make Dialog box and save files into tuple of paths
            files = QtWidgets.QFileDialog().getOpenFileNames(self, "Select soundfiles", os.getcwd(), "lyd(*mp2 *.mp3 *.mp4 *.m4a *wma *wav)")

            self.liste = []
            for url in range(len(files[0])):
                self.liste.append(files[0][url])

        def model_load(self, Dialog):

            # Load picked model
            self.model = whisper.load_model(r'C:\Users\Søren\Downloads\Whisper_gui\models' + "\\" + self.combo_modelSize.currentText() + ".pt")  ## the path is set to where the models are on the other machine

        def run(self, Dialog):

            # Make list for sound files
            liste_df = []

            # Running loop for interpreting and encoding sound files
            for url in range(len(self.liste)):

                # Make dataframe
                df = pd.DataFrame(columns=["filename", "start", "end", "text"])

                # Run model
                result = self.model.transcribe(self.liste[url])

                # Extract results
                for i in range(len(result["segments"])):
                    start = result["segments"][i]["start"]
                    end = result["segments"][i]["end"]
                    text = result["segments"][i]["text"]

                    df = df.append({"filename": self.liste[url].split("/")[-1],
                                    "start": start,
                                    "end": end,
                                    "text": text}, ignore_index=True)

                # Add detected language to dataframe
                df["sprog"] = result["language"]

                liste_df.append(df)

            # Make excel output

            # Concatenate list of dfs
            dataframe = pd.concat(liste_df)

            # Create a Pandas Excel writer using XlsxWriter as the engine.
            writer = pd.ExcelWriter(self.liste[0].split(".")[0] + '_OUTPUT.xlsx', engine='xlsxwriter')
            writer_wrap_format = writer.book.add_format({"text_wrap": True, 'num_format': '@'})

            # Write the dataframe data to XlsxWriter. Turn off the default header and
            # index and skip one row to allow us to insert a user defined header.
            dataframe.to_excel(writer, sheet_name="Output", startrow=1, header=False, index=False)

            # Get the xlsxwriter workbook and worksheet objects.
            # workbook = writer.book
            worksheet = writer.sheets["Output"]

            # Get the dimensions of the dataframe.
            (max_row, max_col) = dataframe.shape

            # Create a list of column headers, to use in add_table().
            column_settings = [{'header': column} for column in dataframe.columns]

            # Add the Excel table structure. Pandas will add the data.
            worksheet.add_table(0, 0, max_row, max_col - 1, {'columns': column_settings})

            # Make the columns wider for clarity.
            worksheet.set_column(0, max_col - 1, 12)

            in_col_no = xlsxwriter.utility.xl_col_to_name(dataframe.columns.get_loc("text"))

            worksheet.set_column(in_col_no + ":" + in_col_no, 30, writer_wrap_format)

            # Close the Pandas Excel writer and output the Excel file.
            writer.save()
            writer.close()

        ## Design setup

        def setupUi(self, Dialog):
            Dialog.setObjectName("Dialog")
            Dialog.resize(730, 400)

            self.select_files = QtWidgets.QPushButton(Dialog)
            self.select_files.setGeometry(QtCore.QRect(40, 62, 81, 31))
            font = QtGui.QFont()
            font.setPointSize(6)
            self.select_files.setFont(font)
            self.select_files.setObjectName("select_files")

            self.combo_modelSize = QtWidgets.QComboBox(Dialog)
            self.combo_modelSize.setGeometry(QtCore.QRect(40, 131, 100, 21))
            font = QtGui.QFont()
            font.setPointSize(6)
            self.combo_modelSize.setFont(font)
            self.combo_modelSize.setObjectName("combo_modelSize")

            self.runButton = QtWidgets.QPushButton(Dialog)
            self.runButton.setGeometry(QtCore.QRect(40, 289, 71, 21))
            font = QtGui.QFont()
            font.setPointSize(6)
            self.runButton.setFont(font)
            self.runButton.setObjectName("runButton")

            self.retranslateUi(Dialog)
            QtCore.QMetaObject.connectSlotsByName(Dialog)

            modelSize_options = ['Chose model', 'tiny', 'base', 'small', 'medium', 'large']
            self.combo_modelSize.addItems(modelSize_options)

            # Do an action!
            self.select_files.clicked.connect(self.browsefiles)
            self.combo_modelSize.currentIndexChanged.connect(self.model_load)
            self.runButton.clicked.connect(self.run)

        def retranslateUi(self, Dialog):
            _translate = QtCore.QCoreApplication.translate
            Dialog.setWindowTitle(_translate("Dialog", "Dialog"))
            self.runButton.setText(_translate("Dialog", "Go!"))
            self.select_files.setText(_translate("Dialog", "Select"))


    if __name__ == "__main__":
        import sys
        app = QtWidgets.QApplication(sys.argv)
        Dialog = QtWidgets.QDialog()
        ui = Ui_Dialog()
        ui.setupUi(Dialog)
        Dialog.show()
        sys.exit(app.exec_())

    I compile this app with PyInstaller using the following command. I had some issues to begin with, so I looked at others with similar problems and ended up with this:

    pyinstaller --onedir --hidden-import=pytorch --collect-data torch --copy-metadata torch --copy-metadata tqdm --copy-metadata tokenizers --copy-metadata importlib_metadata --hidden-import="sklearn.utils._cython_blas" --hidden-import="sklearn.neighbors.typedefs" --hidden-import="sklearn.neighbors.quad_tree" --hidden-import="sklearn.tree" --hidden-import="sklearn.tree._utils" --copy-metadata regex --copy-metadata requests --copy-metadata packaging --copy-metadata filelock --copy-metadata numpy --add-data "./ffmpeg/*;./ffmpeg/" --hidden-import=whisper --copy-metadata whisper --collect-data whisper minimal_example_whisper.py

    When I take the resulting dist directory and try to run the app on another Windows machine without FFmpeg installed (or Whisper, or anything else), I get the following error in the terminal as soon as I push the "run" button in the app (otherwise the app does run).

    C:\Users\Søren>"G:\minimal_example_whisper\minimal_example_whisper.exe"
    whisper\transcribe.py:70: UserWarning: FP16 is not supported on CPU; using FP32 instead
    Traceback (most recent call last):
      File "minimal_example_whisper.py", line 45, in run
      File "whisper\transcribe.py", line 76, in transcribe
      File "whisper\audio.py", line 111, in log_mel_spectrogram
      File "whisper\audio.py", line 42, in load_audio
      File "ffmpeg\_run.py", line 313, in run
      File "ffmpeg\_run.py", line 284, in run_async
      File "subprocess.py", line 951, in __init__
      File "subprocess.py", line 1420, in _execute_child
    FileNotFoundError: [WinError 2] Den angivne fil blev ikke fundet

    I suspect this has something to do with FFmpeg not being installed on the other machine? Does anyone have an automatic way to handle this when compiling the app, or can it simply only run on machines that have FFmpeg installed?
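
    One idea I have been sketching (my own assumption, not something I have confirmed): since Whisper shells out to ffmpeg as an external program, the frozen app could prepend the ffmpeg folder that --add-data "./ffmpeg/*;./ffmpeg/" copies into the bundle to PATH at startup. In this sketch, sys._MEIPASS and the "ffmpeg" folder name are assumptions based on the PyInstaller command above:

    # Hedged sketch: make the frozen app use its bundled ffmpeg.exe so the target
    # machine does not need FFmpeg installed. Call this before any transcription.
    import os
    import sys

    def use_bundled_ffmpeg():
        # PyInstaller exposes the unpacked bundle directory via sys._MEIPASS when
        # frozen; fall back to the executable's folder for a --onedir build.
        base = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(sys.executable)))
        ffmpeg_dir = os.path.join(base, "ffmpeg")  # matches --add-data "./ffmpeg/*;./ffmpeg/"
        os.environ["PATH"] = ffmpeg_dir + os.pathsep + os.environ.get("PATH", "")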

    Thanks in advance!

  • ffmpeg piped output producing incorrect metadata frame count

    8 December 2024, by Xorgon

    The short version: Using piped output from ffmpeg produces a file with incorrect metadata.

    Run ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi to make an AVI file using the piped output.

    ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi

    The output will show that the metadata does not match the actual frames contained in the video.

    Details below.

    Using Python, I am attempting to use ffmpeg to compress videos and put them in a PowerPoint. This works great; however, the video files themselves have incorrect frame counts, which can cause issues when I read from those videos in other code.

    Edit for clarification: by "frame count" I mean the metadata frame count. The actual number of frames contained in the video is correct, but querying the metadata gives an incorrect frame count.

    Having eliminated the PowerPoint aspect of the code, I've narrowed this down to the following minimal reproducing example of saving an output from an ffmpeg pipe:

    from subprocess import Popen, PIPE

    video_path = 'test_mp4.mp4'

    ffmpeg_pipe = Popen(['ffmpeg',
                         '-y',  # Overwrite files
                         '-i', f'{video_path}',  # Input from file
                         '-f', 'avi',  # Output format
                         '-c:v', 'libx264',  # Codec
                         '-'],  # Output to pipe
                        stdout=PIPE)

    new_path = "piped_video.avi"
    vid_file = open(new_path, "wb")
    vid_file.write(ffmpeg_pipe.stdout.read())
    vid_file.close()

    I've tested several different videos. One small example video that I've tested can be found here.

    I've tried a few different codecs with avi format and tried libvpx with webm format. For the avi outputs, the frame count usually reads as 1073741824 (2^30). Weirdly, for the webm format, the frame count read as -276701161105643264.

    Edit: This issue can also be reproduced with just ffmpeg in command prompt using the following command:
    ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi

    This is a snippet I used to read the frame count, but one could also see the error by opening the video details in Windows Explorer and seeing the total time as something like 9942 hours, 3 minutes, and 14 seconds.

    import cv2

    video_path = 'test_mp4.mp4'
    new_path = "piped_video.webm"

    cap = cv2.VideoCapture(video_path)
    print(f"Original video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}")
    cap.release()

    cap = cv2.VideoCapture(new_path)
    print(f"Piped video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}")
    cap.release()

    The error can also be observed using ffprobe with the following command: ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi. Note that the frame rate and the number of frames counted by ffprobe do not match the duration from the metadata.

    For completeness, here is the ffmpeg output:

    ffmpeg version 2023-06-11-git-09621fd7d9-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
      built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
      configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
      libavutil      58. 13.100 / 58. 13.100
      libavcodec     60. 17.100 / 60. 17.100
      libavformat    60.  6.100 / 60.  6.100
      libavdevice    60.  2.100 / 60.  2.100
      libavfilter     9.  8.101 /  9.  8.101
      libswscale      7.  3.100 /  7.  3.100
      libswresample   4. 11.100 /  4. 11.100
      libpostproc    57.  2.100 / 57.  2.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_mp4.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: isommp42
        creation_time   : 2022-08-10T12:54:09.000000Z
      Duration: 00:00:06.67, start: 0.000000, bitrate: 567 kb/s
      Stream #0:0[0x1](eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], 563 kb/s, 30 fps, 30 tbr, 30k tbn (default)
        Metadata:
          creation_time   : 2022-08-10T12:54:09.000000Z
          handler_name    : Mainconcept MP4 Video Media Handler
          vendor_id       : [0][0][0][0]
          encoder         : AVC Coding
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    [libx264 @ 0000018c68c8b9c0] using SAR=1/1
    [libx264 @ 0000018c68c8b9c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 0000018c68c8b9c0] profile High, level 2.1, 4:2:0, 8-bit
    Output #0, avi, to 'pipe:':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: isommp42
        ISFT            : Lavf60.6.100
      Stream #0:0(eng): Video: h264 (H264 / 0x34363248), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], q=2-31, 30 fps, 30 tbn (default)
        Metadata:
          creation_time   : 2022-08-10T12:54:09.000000Z
          handler_name    : Mainconcept MP4 Video Media Handler
          vendor_id       : [0][0][0][0]
          encoder         : Lavc60.17.100 libx264
        Side data:
          cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
    [out#0/avi @ 0000018c687f47c0] video:82kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.631060%
    frame=  200 fps=0.0 q=-1.0 Lsize=      85kB time=00:00:06.56 bitrate= 106.5kbits/s speed=76.2x
    [libx264 @ 0000018c68c8b9c0] frame I:1     Avg QP:16.12  size:  3659
    [libx264 @ 0000018c68c8b9c0] frame P:80    Avg QP:21.31  size:   647
    [libx264 @ 0000018c68c8b9c0] frame B:119   Avg QP:26.74  size:   243
    [libx264 @ 0000018c68c8b9c0] consecutive B-frames:  3.0% 53.0%  0.0% 44.0%
    [libx264 @ 0000018c68c8b9c0] mb I  I16..4: 17.6% 70.6% 11.8%
    [libx264 @ 0000018c68c8b9c0] mb P  I16..4:  0.8%  1.7%  0.6%  P16..4: 17.6%  4.6%  3.3%  0.0%  0.0%    skip:71.4%
    [libx264 @ 0000018c68c8b9c0] mb B  I16..4:  0.1%  0.3%  0.2%  B16..8: 11.7%  1.4%  0.4%  direct: 0.6%  skip:85.4%  L0:32.0% L1:59.7% BI: 8.3%
    [libx264 @ 0000018c68c8b9c0] 8x8 transform intra:59.6% inter:62.4%
    [libx264 @ 0000018c68c8b9c0] coded y,uvDC,uvAC intra: 48.5% 0.0% 0.0% inter: 3.5% 0.0% 0.0%
    [libx264 @ 0000018c68c8b9c0] i16 v,h,dc,p: 19% 39% 25% 17%
    [libx264 @ 0000018c68c8b9c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 25% 30%  3%  3%  4%  4%  4%  5%
    [libx264 @ 0000018c68c8b9c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 20% 16%  6%  8%  8%  8%  5%  6%
    [libx264 @ 0000018c68c8b9c0] i8c dc,h,v,p: 100%  0%  0%  0%
    [libx264 @ 0000018c68c8b9c0] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0000018c68c8b9c0] ref P L0: 76.2%  7.9% 11.2%  4.7%
    [libx264 @ 0000018c68c8b9c0] ref B L0: 85.6% 12.9%  1.5%
    [libx264 @ 0000018c68c8b9c0] ref B L1: 97.7%  2.3%
    [libx264 @ 0000018c68c8b9c0] kb/s:101.19

    So the question is: why does this happen, and how can one avoid it?
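
    One workaround I am considering (an assumption on my part, not a confirmed explanation: the AVI muxer appears unable to seek back over a pipe to patch the header with the real frame count) is to remux the piped file once it is on disk, so the header gets rewritten. A minimal sketch, where "fixed_video.avi" is just an illustrative name:

    # Hedged sketch: stream-copy the piped AVI into a new file so ffmpeg can seek
    # on a regular file and write a correct header (no re-encoding involved).
    import subprocess

    subprocess.run(
        ["ffmpeg", "-y", "-i", "piped_video.avi", "-c", "copy", "fixed_video.avi"],
        check=True,
    )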

  • FFmpeg RTSP drop rate increases when frame rate is reduced

    13 April 2024, by Avishka Perera

    I need to read an RTSP stream, process the images individually in Python, and then write the images back to an RTSP stream. As the RTSP server, I am using Mediamtx [1]. For streaming, I am using FFmpeg [2].

    I have the following code that works perfectly fine. For simplification purposes, I am streaming three generated images.

    import time
    import numpy as np
    import subprocess

    width, height = 640, 480
    fps = 25
    rtsp_server_address = f"rtsp://localhost:8554/mystream"

    ffmpeg_cmd = [
        "ffmpeg",
        "-re",
        "-f",
        "rawvideo",
        "-pix_fmt",
        "rgb24",
        "-s",
        f"{width}x{height}",
        "-i",
        "-",
        "-r",
        str(fps),
        "-avoid_negative_ts",
        "make_zero",
        "-vcodec",
        "libx264",
        "-threads",
        "4",
        "-f",
        "rtsp",
        rtsp_server_address,
    ]
    colors = np.array(
        [
            [255, 0, 0],
            [0, 255, 0],
            [0, 0, 255],
        ]
    ).reshape(3, 1, 1, 3)
    images = (np.ones((3, width, height, 3)) * colors).astype(np.uint8)

    if __name__ == "__main__":

        process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)
        start = time.time()
        exported = 0
        while True:
            exported += 1
            next_time = start + exported / fps
            now = time.time()
            if next_time > now:
                sleep_dur = next_time - now
                time.sleep(sleep_dur)

            image = images[exported % 3]
            image_bytes = image.tobytes()

            process.stdin.write(image_bytes)
            process.stdin.flush()

        process.stdin.close()
        process.wait()

    The issue is that I need to run this at 10 fps, because the processing step is heavy and can only afford 10 fps. Hence, as I reduce the frame rate from 25 to 10, the drop rate increases from 0% to 100%, and after a few iterations I get a BrokenPipeError: [Errno 32] Broken pipe. Refer to the appendix for the complete log.
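
    One change I have been experimenting with (an assumption, not a confirmed fix): declaring the frame rate on the input side as well, so the rawvideo demuxer does not fall back to its 25 fps default while I only write 10 frames per second. A sketch of the modified command, with the other variables as in the script above:

    # Hedged sketch: add "-framerate" before "-i" so "-re" paces reads at the real
    # input rate instead of rawvideo's 25 fps default; everything else is unchanged.
    width, height = 640, 480
    fps = 10  # the target rate
    rtsp_server_address = "rtsp://localhost:8554/mystream"

    ffmpeg_cmd = [
        "ffmpeg",
        "-re",
        "-f", "rawvideo",
        "-pix_fmt", "rgb24",
        "-s", f"{width}x{height}",
        "-framerate", str(fps),  # assumed addition: input frame rate
        "-i", "-",
        "-r", str(fps),
        "-avoid_negative_ts", "make_zero",
        "-vcodec", "libx264",
        "-threads", "4",
        "-f", "rtsp",
        rtsp_server_address,
    ]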

    As an alternative, I can use OpenCV compiled from source with GStreamer [3], but I prefer using FFmpeg to keep the shipping process simple, since compiling OpenCV from source can be tedious and system-dependent.

    References

    [1] Mediamtx (formerly rtsp-simple-server): https://github.com/bluenviron/mediamtx

    [2] FFmpeg: https://github.com/FFmpeg/FFmpeg

    [3] Compile OpenCV with GStreamer: https://github.com/bluenviron/mediamtx?tab=readme-ov-file#opencv

    Appendix

    Creating the source stream

    To instantiate the unprocessed stream, I use the following command. This streams the content of my webcam as an RTSP stream.

    ffmpeg -video_size 1280x720 -i /dev/video0 -avoid_negative_ts make_zero -vcodec libx264 -r 10 -f rtsp rtsp://localhost:8554/webcam

    Error log

    ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers
      built with gcc 12.3.0 (conda-forge gcc 12.3.0-5)
      configuration: --prefix=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-cc --cxx=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-c++ --nm=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-nm --ar=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-ar --disable-doc --disable-openssl --enable-demuxer=dash --enable-hardcoded-tables --enable-libfreetype --enable-libharfbuzz --enable-libfontconfig --enable-libopenh264 --enable-libdav1d --enable-gnutls --enable-libmp3lame --enable-libvpx --enable-libass --enable-pthreads --enable-vaapi --enable-libopenvino --enable-gpl --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-pic --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libopus --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/pkg-config
      libavutil      58. 29.100 / 58. 29.100
      libavcodec     60. 31.102 / 60. 31.102
      libavformat    60. 16.100 / 60. 16.100
      libavdevice    60.  3.100 / 60.  3.100
      libavfilter     9. 12.100 /  9. 12.100
      libswscale      7.  5.100 /  7.  5.100
      libswresample   4. 12.100 /  4. 12.100
      libpostproc    57.  3.100 / 57.  3.100
    Input #0, rawvideo, from 'fd:':
      Duration: N/A, start: 0.000000, bitrate: 184320 kb/s
      Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 640x480, 184320 kb/s, 25 tbr, 25 tbn
    Stream mapping:
      Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
    [libx264 @ 0x5e2ef8b01340] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 0x5e2ef8b01340] profile High 4:4:4 Predictive, level 2.2, 4:4:4, 8-bit
    [libx264 @ 0x5e2ef8b01340] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=10 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, rtsp, to 'rtsp://localhost:8554/mystream':
      Metadata:
        encoder         : Lavf60.16.100
      Stream #0:0: Video: h264, yuv444p(tv, progressive), 640x480, q=2-31, 10 fps, 90k tbn
        Metadata:
          encoder         : Lavc60.31.102 libx264
        Side data:
          cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
    [vost#0:0/libx264 @ 0x5e2ef8b01080] Error submitting a packet to the muxer: Broken pipe
    [out#0/rtsp @ 0x5e2ef8afd780] Error muxing a packet
    [out#0/rtsp @ 0x5e2ef8afd780] video:1kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
    frame=    1 fps=0.1 q=-1.0 Lsize=N/A time=00:00:04.70 bitrate=N/A dup=0 drop=70 speed=0.389x
    [libx264 @ 0x5e2ef8b01340] frame I:16    Avg QP: 6.00  size:   147
    [libx264 @ 0x5e2ef8b01340] frame P:17    Avg QP: 9.94  size:   101
    [libx264 @ 0x5e2ef8b01340] frame B:17    Avg QP: 9.94  size:    64
    [libx264 @ 0x5e2ef8b01340] consecutive B-frames: 50.0%  0.0% 42.0%  8.0%
    [libx264 @ 0x5e2ef8b01340] mb I  I16..4: 81.3% 18.7%  0.0%
    [libx264 @ 0x5e2ef8b01340] mb P  I16..4: 52.9%  0.0%  0.0%  P16..4:  0.0%  0.0%  0.0%  0.0%  0.0%    skip:47.1%
    [libx264 @ 0x5e2ef8b01340] mb B  I16..4:  0.0%  5.9%  0.0%  B16..8:  0.1%  0.0%  0.0%  direct: 0.0%  skip:94.0%  L0:56.2% L1:43.8% BI: 0.0%
    [libx264 @ 0x5e2ef8b01340] 8x8 transform intra:15.4% inter:100.0%
    [libx264 @ 0x5e2ef8b01340] coded y,u,v intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
    [libx264 @ 0x5e2ef8b01340] i16 v,h,dc,p: 97%  0%  3%  0%
    [libx264 @ 0x5e2ef8b01340] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  0%  0% 100%  0%  0%  0%  0%  0%  0%
    [libx264 @ 0x5e2ef8b01340] Weighted P-Frames: Y:52.9% UV:52.9%
    [libx264 @ 0x5e2ef8b01340] ref P L0: 88.9%  0.0%  0.0% 11.1%
    [libx264 @ 0x5e2ef8b01340] kb/s:8.27
    Conversion failed!
    Traceback (most recent call last):
      File "/home/avishka/projects/read-process-stream/minimal-ffmpeg-error.py", line 58, in <module>
        process.stdin.write(image_bytes)
    BrokenPipeError: [Errno 32] Broken pipe
