
Media (91)
-
Spoon - Revenge!
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
My Morning Jacket - One Big Holiday
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Zap Mama - Wadidyusay?
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
David Byrne - My Fair Lady
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Beastie Boys - Now Get Busy
15 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Granite de l’Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
Other articles (80)
-
The plugin: Gestion de la mutualisation
2 March 2010
The Gestion de mutualisation plugin makes it possible to manage the various MediaSPIP channels from a master site. Its purpose is to provide a pure SPIP solution to replace the previous one.
Basic installation
Install the SPIP files on the server.
Then add the "mutualisation" plugin at the root of the site, as described here.
Customise the central mes_options.php file as you wish. As an example, here is the one used on the mediaspip.net platform:
<?php (...) -
Videos
21 April 2011
Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 video tag.
One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name just one) and that each browser natively supports only certain video formats.
Its main advantage is that video playback is handled natively by the browser, which makes it possible to do without Flash and (...) -
Improving the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two images below to compare.
To do this, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
On other sites (8623)
-
PyQt5 GUI dependent on FFmpeg compiled with PyInstaller doesn't run on other machines?
19 October 2022, by Soren
I am trying to create a simple PyQt5 GUI for Windows 10 that uses OpenAI's Whisper model to transcribe a sound file and output the results to an Excel file. It works on my own computer, where I have installed the necessary dependencies for Whisper as stated on their GitHub, i.e. FFmpeg. I provide a minimal example of my code below:


# Import library
import whisper
import os
from PyQt5 import QtCore, QtGui, QtWidgets
import pandas as pd
import xlsxwriter


class Ui_Dialog(QtWidgets.QDialog):

    # Define functions to use in GUI

    # Define function for selecting input files
    def browsefiles(self, Dialog):

        # Make Dialog box and save files into tuple of paths
        files = QtWidgets.QFileDialog().getOpenFileNames(self, "Select soundfiles", os.getcwd(), "lyd(*mp2 *.mp3 *.mp4 *.m4a *wma *wav)")

        self.liste = []
        for url in range(len(files[0])):
            self.liste.append(files[0][url])

    def model_load(self, Dialog):

        # Load picked model
        self.model = whisper.load_model(r'C:\Users\Søren\Downloads\Whisper_gui\models' + "\\" + self.combo_modelSize.currentText() + ".pt")  ## the path is set to where the models are on the other machine

    def run(self, Dialog):

        # Make list for sound files
        liste_df = []

        # Running loop for interpreting and encoding sound files
        for url in range(len(self.liste)):

            # Make dataframe
            df = pd.DataFrame(columns=["filename", "start", "end", "text"])

            # Run model
            result = self.model.transcribe(self.liste[url])

            # Extract results
            for i in range(len(result["segments"])):
                start = result["segments"][i]["start"]
                end = result["segments"][i]["end"]
                text = result["segments"][i]["text"]

                df = df.append({"filename": self.liste[url].split("/")[-1],
                                "start": start,
                                "end": end,
                                "text": text}, ignore_index=True)

            # Add detected language to dataframe
            df["sprog"] = result["language"]

            liste_df.append(df)

        # Make excel output

        # Concatenate list of dfs
        dataframe = pd.concat(liste_df)

        # Create a Pandas Excel writer using XlsxWriter as the engine.
        writer = pd.ExcelWriter(self.liste[0].split(".")[0] + '_OUTPUT.xlsx', engine='xlsxwriter')
        writer_wrap_format = writer.book.add_format({"text_wrap": True, 'num_format': '@'})

        # Write the dataframe data to XlsxWriter. Turn off the default header and
        # index and skip one row to allow us to insert a user defined header.
        dataframe.to_excel(writer, sheet_name="Output", startrow=1, header=False, index=False)

        # Get the xlsxwriter workbook and worksheet objects.
        #workbook = writer.book
        worksheet = writer.sheets["Output"]

        # Get the dimensions of the dataframe.
        (max_row, max_col) = dataframe.shape

        # Create a list of column headers, to use in add_table().
        column_settings = [{'header': column} for column in dataframe.columns]

        # Add the Excel table structure. Pandas will add the data.
        worksheet.add_table(0, 0, max_row, max_col - 1, {'columns': column_settings})

        # Make the columns wider for clarity.
        worksheet.set_column(0, max_col - 1, 12)

        in_col_no = xlsxwriter.utility.xl_col_to_name(dataframe.columns.get_loc("text"))

        worksheet.set_column(in_col_no + ":" + in_col_no, 30, writer_wrap_format)

        # Close the Pandas Excel writer and output the Excel file.
        writer.save()
        writer.close()

    ## Design setup

    def setupUi(self, Dialog):
        Dialog.setObjectName("Dialog")
        Dialog.resize(730, 400)

        self.select_files = QtWidgets.QPushButton(Dialog)
        self.select_files.setGeometry(QtCore.QRect(40, 62, 81, 31))
        font = QtGui.QFont()
        font.setPointSize(6)
        self.select_files.setFont(font)
        self.select_files.setObjectName("select_files")

        self.combo_modelSize = QtWidgets.QComboBox(Dialog)
        self.combo_modelSize.setGeometry(QtCore.QRect(40, 131, 100, 21))
        font = QtGui.QFont()
        font.setPointSize(6)
        self.combo_modelSize.setFont(font)
        self.combo_modelSize.setObjectName("combo_modelSize")

        self.runButton = QtWidgets.QPushButton(Dialog)
        self.runButton.setGeometry(QtCore.QRect(40, 289, 71, 21))
        font = QtGui.QFont()
        font.setPointSize(6)
        self.runButton.setFont(font)
        self.runButton.setObjectName("runButton")

        self.retranslateUi(Dialog)
        QtCore.QMetaObject.connectSlotsByName(Dialog)

        modelSize_options = ['Chose model', 'tiny', 'base', 'small', 'medium', 'large']
        self.combo_modelSize.addItems(modelSize_options)

        # Do an action!
        self.select_files.clicked.connect(self.browsefiles)
        self.combo_modelSize.currentIndexChanged.connect(self.model_load)
        self.runButton.clicked.connect(self.run)

    def retranslateUi(self, Dialog):
        _translate = QtCore.QCoreApplication.translate
        Dialog.setWindowTitle(_translate("Dialog", "Dialog"))
        self.runButton.setText(_translate("Dialog", "Go!"))
        self.select_files.setText(_translate("Dialog", "Select"))


if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    Dialog = QtWidgets.QDialog()
    ui = Ui_Dialog()
    ui.setupUi(Dialog)
    Dialog.show()
    sys.exit(app.exec_())



I compile this app with PyInstaller using the following command. I had some issues to begin with, so I looked at others with similar problems and ended up with this:


pyinstaller --onedir --hidden-import=pytorch --collect-data torch --copy-metadata torch --copy-metadata tqdm --copy-metadata tokenizers --copy-metadata importlib_metadata --hidden-import="sklearn.utils._cython_blas" --hidden-import="sklearn.neighbors.typedefs" --hidden-import="sklearn.neighbors.quad_tree" --hidden-import="sklearn.tree" --hidden-import="sklearn.tree._utils" --copy-metadata regex --copy-metadata requests --copy-metadata packaging --copy-metadata filelock --copy-metadata numpy --add-data "./ffmpeg/*;./ffmpeg/" --hidden-import=whisper --copy-metadata whisper --collect-data whisper minimal_example_whisper.py


When I take the resulting dist directory and try to run the app on another Windows machine without FFmpeg installed (or Whisper, or anything else), I get the following error in the terminal when I push the "run" button in the app (otherwise the app does run).


C:\Users\Søren>"G:\minimal_example_whisper\minimal_example_whisper.exe"
whisper\transcribe.py:70: UserWarning: FP16 is not supported on CPU; using FP32 instead
Traceback (most recent call last):
 File "minimal_example_whisper.py", line 45, in run
 File "whisper\transcribe.py", line 76, in transcribe
 File "whisper\audio.py", line 111, in log_mel_spectrogram
 File "whisper\audio.py", line 42, in load_audio
 File "ffmpeg\_run.py", line 313, in run
 File "ffmpeg\_run.py", line 284, in run_async
 File "subprocess.py", line 951, in __init__
 File "subprocess.py", line 1420, in _execute_child
FileNotFoundError: [WinError 2] Den angivne fil blev ikke fundet



I suspect this has something to do with FFmpeg not being installed on the other machine (the Danish message translates to "The specified file was not found"). Does anyone have an automatic solution for this when compiling the app, or can it simply only run on machines that have FFmpeg installed?
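
One commonly suggested workaround, given here only as a sketch and not verified against this exact setup: Whisper runs ffmpeg as a subprocess that is resolved via PATH, so the ffmpeg.exe bundled with --add-data "./ffmpeg/*;./ffmpeg/" is never found on a machine without a system-wide FFmpeg. Prepending the bundled folder to PATH at runtime, before Whisper invokes ffmpeg, is one way to make the frozen app self-contained. The folder name "ffmpeg" and the use of sys._MEIPASS below are assumptions that depend on the PyInstaller version and the --add-data target.

# Sketch only: make the ffmpeg.exe bundled with --add-data "./ffmpeg/*;./ffmpeg/"
# visible to Whisper's subprocess call by prepending its folder to PATH.
import os
import sys

def add_bundled_ffmpeg_to_path():
    # PyInstaller exposes the bundle directory as sys._MEIPASS in a frozen app;
    # fall back to the script's own directory when running from source.
    base_dir = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(__file__)))
    ffmpeg_dir = os.path.join(base_dir, "ffmpeg")  # assumed --add-data target folder
    if os.path.isdir(ffmpeg_dir):
        os.environ["PATH"] = ffmpeg_dir + os.pathsep + os.environ.get("PATH", "")

add_bundled_ffmpeg_to_path()
import whisper  # imported after PATH is patched so its ffmpeg lookup can succeed

Checking shutil.which("ffmpeg") at startup is also a quick way to confirm whether an ffmpeg executable is actually reachable on the target machine.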


Thanks in advance!


-
ffmpeg piped output producing incorrect metadata frame count
8 December 2024, by Xorgon
The short version: using piped output from ffmpeg produces a file with incorrect metadata.


Run
ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi
to make an AVI file using the pipe output, then inspect it with
ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi


The output will show that the metadata does not match the actual frames contained in the video.


Details below.



Using Python, I am attempting to use ffmpeg to compress videos and put them in a PowerPoint. This works great; however, the video files themselves have incorrect frame counts, which can cause issues when I read from those videos in other code.


Edit for clarification: by "frame count" I mean the metadata frame count. The actual number of frames contained in the video is correct, but querying the metadata gives an incorrect frame count.


Having eliminated the PowerPoint aspect of the code, I've narrowed this down to the following minimal reproducing example of saving an output from an ffmpeg pipe:


from subprocess import Popen, PIPE

video_path = 'test_mp4.mp4'

ffmpeg_pipe = Popen(['ffmpeg',
                     '-y',                   # Overwrite files
                     '-i', f'{video_path}',  # Input from file
                     '-f', 'avi',            # Output format
                     '-c:v', 'libx264',      # Codec
                     '-'],                   # Output to pipe
                    stdout=PIPE)

new_path = "piped_video.avi"
vid_file = open(new_path, "wb")
vid_file.write(ffmpeg_pipe.stdout.read())
vid_file.close()



I've tested several different videos. One small example video that I've tested can be found here.


I've tried a few different codecs with the avi format and tried libvpx with the webm format. For the avi outputs, the frame count usually reads as 1073741824 (2^30). Weirdly, for the webm format, the frame count read as -276701161105643264.

Edit: this issue can also be reproduced with just ffmpeg in the command prompt using the following command:

ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi


This is a snippet I used to read the frame count, but one could also see the error by opening the video details in Windows Explorer and seeing the total time as something like 9942 hours, 3 minutes, and 14 seconds.


import cv2

video_path = 'test_mp4.mp4'
new_path = "piped_video.webm"

cap = cv2.VideoCapture(video_path)
print(f"Original video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}")
cap.release()

cap = cv2.VideoCapture(new_path)
print(f"Piped video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}")
cap.release()



The error can also be observed using ffprobe with the following command: ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi. Note that the frame rate and number of frames counted by ffprobe do not match the duration from the metadata.

For completeness, here is the ffmpeg output:


ffmpeg version 2023-06-11-git-09621fd7d9-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
 configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
 libavutil 58. 13.100 / 58. 13.100
 libavcodec 60. 17.100 / 60. 17.100
 libavformat 60. 6.100 / 60. 6.100
 libavdevice 60. 2.100 / 60. 2.100
 libavfilter 9. 8.101 / 9. 8.101
 libswscale 7. 3.100 / 7. 3.100
 libswresample 4. 11.100 / 4. 11.100
 libpostproc 57. 2.100 / 57. 2.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_mp4.mp4':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: isommp42
 creation_time : 2022-08-10T12:54:09.000000Z
 Duration: 00:00:06.67, start: 0.000000, bitrate: 567 kb/s
 Stream #0:0[0x1](eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], 563 kb/s, 30 fps, 30 tbr, 30k tbn (default)
 Metadata:
 creation_time : 2022-08-10T12:54:09.000000Z
 handler_name : Mainconcept MP4 Video Media Handler
 vendor_id : [0][0][0][0]
 encoder : AVC Coding
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0000018c68c8b9c0] using SAR=1/1
[libx264 @ 0000018c68c8b9c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000018c68c8b9c0] profile High, level 2.1, 4:2:0, 8-bit
Output #0, avi, to 'pipe:':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: isommp42
 ISFT : Lavf60.6.100
 Stream #0:0(eng): Video: h264 (H264 / 0x34363248), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], q=2-31, 30 fps, 30 tbn (default)
 Metadata:
 creation_time : 2022-08-10T12:54:09.000000Z
 handler_name : Mainconcept MP4 Video Media Handler
 vendor_id : [0][0][0][0]
 encoder : Lavc60.17.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[out#0/avi @ 0000018c687f47c0] video:82kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.631060%
frame= 200 fps=0.0 q=-1.0 Lsize= 85kB time=00:00:06.56 bitrate= 106.5kbits/s speed=76.2x 
[libx264 @ 0000018c68c8b9c0] frame I:1 Avg QP:16.12 size: 3659
[libx264 @ 0000018c68c8b9c0] frame P:80 Avg QP:21.31 size: 647
[libx264 @ 0000018c68c8b9c0] frame B:119 Avg QP:26.74 size: 243
[libx264 @ 0000018c68c8b9c0] consecutive B-frames: 3.0% 53.0% 0.0% 44.0%
[libx264 @ 0000018c68c8b9c0] mb I I16..4: 17.6% 70.6% 11.8%
[libx264 @ 0000018c68c8b9c0] mb P I16..4: 0.8% 1.7% 0.6% P16..4: 17.6% 4.6% 3.3% 0.0% 0.0% skip:71.4%
[libx264 @ 0000018c68c8b9c0] mb B I16..4: 0.1% 0.3% 0.2% B16..8: 11.7% 1.4% 0.4% direct: 0.6% skip:85.4% L0:32.0% L1:59.7% BI: 8.3%
[libx264 @ 0000018c68c8b9c0] 8x8 transform intra:59.6% inter:62.4%
[libx264 @ 0000018c68c8b9c0] coded y,uvDC,uvAC intra: 48.5% 0.0% 0.0% inter: 3.5% 0.0% 0.0%
[libx264 @ 0000018c68c8b9c0] i16 v,h,dc,p: 19% 39% 25% 17%
[libx264 @ 0000018c68c8b9c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 25% 30% 3% 3% 4% 4% 4% 5%
[libx264 @ 0000018c68c8b9c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 20% 16% 6% 8% 8% 8% 5% 6%
[libx264 @ 0000018c68c8b9c0] i8c dc,h,v,p: 100% 0% 0% 0%
[libx264 @ 0000018c68c8b9c0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0000018c68c8b9c0] ref P L0: 76.2% 7.9% 11.2% 4.7%
[libx264 @ 0000018c68c8b9c0] ref B L0: 85.6% 12.9% 1.5%
[libx264 @ 0000018c68c8b9c0] ref B L1: 97.7% 2.3%
[libx264 @ 0000018c68c8b9c0] kb/s:101.19



So the question is: why does this happen, and how can one avoid it?
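
A plausible explanation, and a sketch of one way around it (assuming the behaviour really is caused by the non-seekable output, and that writing a temporary file on disk is acceptable): when ffmpeg writes the AVI to stdout it cannot seek back to patch the header after encoding, so fields such as the total frame count and duration keep placeholder values, which is consistent with the 2^30 frame count seen above. Writing to a real, seekable file and then reading the bytes back gives the muxer a chance to finalise the header:

# Sketch (assumes ffmpeg is on PATH): encode to a seekable temporary file so the
# muxer can rewrite the header with the real duration/frame count, then read the
# bytes back instead of consuming ffmpeg's stdout.
import os
import subprocess
import tempfile

video_path = 'test_mp4.mp4'
new_path = "piped_video.avi"

with tempfile.TemporaryDirectory() as tmp_dir:
    tmp_out = os.path.join(tmp_dir, "out.avi")
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-f", "avi", "-c:v", "libx264", tmp_out],
        check=True,
    )
    with open(tmp_out, "rb") as src, open(new_path, "wb") as dst:
        dst.write(src.read())

If the pipe itself is required, remuxing the piped result once (for example ffmpeg -i piped_video.avi -c copy fixed.avi) should also rewrite the header, at the cost of an extra pass.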


-
FFmpeg RTSP drop rate increases when frame rate is reduced
13 April 2024, by Avishka Perera
I need to read an RTSP stream, process the images individually in Python, and then write the images back to an RTSP stream. As the RTSP server, I am using Mediamtx [1]. For streaming, I am using FFmpeg [2].


I have the following code that works perfectly fine. For simplification purposes, I am streaming three generated images.


import time
import numpy as np
import subprocess

width, height = 640, 480
fps = 25
rtsp_server_address = f"rtsp://localhost:8554/mystream"

ffmpeg_cmd = [
    "ffmpeg",
    "-re",
    "-f",
    "rawvideo",
    "-pix_fmt",
    "rgb24",
    "-s",
    f"{width}x{height}",
    "-i",
    "-",
    "-r",
    str(fps),
    "-avoid_negative_ts",
    "make_zero",
    "-vcodec",
    "libx264",
    "-threads",
    "4",
    "-f",
    "rtsp",
    rtsp_server_address,
]
colors = np.array(
    [
        [255, 0, 0],
        [0, 255, 0],
        [0, 0, 255],
    ]
).reshape(3, 1, 1, 3)
images = (np.ones((3, width, height, 3)) * colors).astype(np.uint8)

if __name__ == "__main__":

    process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)
    start = time.time()
    exported = 0
    while True:
        exported += 1
        next_time = start + exported / fps
        now = time.time()
        if next_time > now:
            sleep_dur = next_time - now
            time.sleep(sleep_dur)

        image = images[exported % 3]
        image_bytes = image.tobytes()

        process.stdin.write(image_bytes)
        process.stdin.flush()

    process.stdin.close()
    process.wait()



The issue is that I need to run this at 10 fps because the processing step is heavy and can only afford 10 fps. Hence, as I reduce the frame rate from 25 to 10, the drop rate increases from 0% to 100%, and after a few iterations I get a BrokenPipeError: [Errno 32] Broken pipe. Refer to the appendix for the complete log.

As an alternative, I can use OpenCV compiled from source with GStreamer [3], but I prefer using FFmpeg to keep the shipping process simple, since compiling OpenCV from source can be tedious and system-dependent.
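
One detail in the log may be worth ruling out first, sketched here as an assumption rather than a confirmed fix: the rawvideo input is reported as "25 tbr, 25 tbn", i.e. ffmpeg assumes the rawvideo default of 25 fps for the piped frames, while the script feeds them at 10 fps and the output is forced to -r 10. Declaring the real input frame rate with the rawvideo demuxer's -framerate option keeps the input and output rates consistent:

# Sketch (untested): tell ffmpeg that the raw frames arrive at 10 fps instead of
# letting the rawvideo demuxer assume its default of 25 fps.
fps = 10
width, height = 640, 480
rtsp_server_address = "rtsp://localhost:8554/mystream"

ffmpeg_cmd = [
    "ffmpeg",
    "-re",
    "-f", "rawvideo",
    "-pix_fmt", "rgb24",
    "-s", f"{width}x{height}",
    "-framerate", str(fps),  # input option: must come before "-i -"
    "-i", "-",
    "-r", str(fps),          # output frame rate
    "-avoid_negative_ts", "make_zero",
    "-vcodec", "libx264",
    "-threads", "4",
    "-f", "rtsp",
    rtsp_server_address,
]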


References


[1] Mediamtx (formerly rtsp-simple-server) : https://github.com/bluenviron/mediamtx


[2] FFmpeg : https://github.com/FFmpeg/FFmpeg


[3] Compile OpenCV with GStreamer : https://github.com/bluenviron/mediamtx?tab=readme-ov-file#opencv


Appendix


Creating the source stream


To instantiate the unprocessed stream, I use the following command. This streams the content of my webcam as an RTSP stream.


ffmpeg -video_size 1280x720 -i /dev/video0 -avoid_negative_ts make_zero -vcodec libx264 -r 10 -f rtsp rtsp://localhost:8554/webcam



Error log


ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 12.3.0 (conda-forge gcc 12.3.0-5)
 configuration: --prefix=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-cc --cxx=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-c++ --nm=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-nm --ar=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-ar --disable-doc --disable-openssl --enable-demuxer=dash --enable-hardcoded-tables --enable-libfreetype --enable-libharfbuzz --enable-libfontconfig --enable-libopenh264 --enable-libdav1d --enable-gnutls --enable-libmp3lame --enable-libvpx --enable-libass --enable-pthreads --enable-vaapi --enable-libopenvino --enable-gpl --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-pic --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libopus --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/pkg-config
 libavutil 58. 29.100 / 58. 29.100
 libavcodec 60. 31.102 / 60. 31.102
 libavformat 60. 16.100 / 60. 16.100
 libavdevice 60. 3.100 / 60. 3.100
 libavfilter 9. 12.100 / 9. 12.100
 libswscale 7. 5.100 / 7. 5.100
 libswresample 4. 12.100 / 4. 12.100
 libpostproc 57. 3.100 / 57. 3.100
Input #0, rawvideo, from 'fd:':
 Duration: N/A, start: 0.000000, bitrate: 184320 kb/s
 Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 640x480, 184320 kb/s, 25 tbr, 25 tbn
Stream mapping:
 Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
[libx264 @ 0x5e2ef8b01340] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x5e2ef8b01340] profile High 4:4:4 Predictive, level 2.2, 4:4:4, 8-bit
[libx264 @ 0x5e2ef8b01340] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=10 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, rtsp, to 'rtsp://localhost:8554/mystream':
 Metadata:
 encoder : Lavf60.16.100
 Stream #0:0: Video: h264, yuv444p(tv, progressive), 640x480, q=2-31, 10 fps, 90k tbn
 Metadata:
 encoder : Lavc60.31.102 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[vost#0:0/libx264 @ 0x5e2ef8b01080] Error submitting a packet to the muxer: Broken pipe 
[out#0/rtsp @ 0x5e2ef8afd780] Error muxing a packet
[out#0/rtsp @ 0x5e2ef8afd780] video:1kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
frame= 1 fps=0.1 q=-1.0 Lsize=N/A time=00:00:04.70 bitrate=N/A dup=0 drop=70 speed=0.389x 
[libx264 @ 0x5e2ef8b01340] frame I:16 Avg QP: 6.00 size: 147
[libx264 @ 0x5e2ef8b01340] frame P:17 Avg QP: 9.94 size: 101
[libx264 @ 0x5e2ef8b01340] frame B:17 Avg QP: 9.94 size: 64
[libx264 @ 0x5e2ef8b01340] consecutive B-frames: 50.0% 0.0% 42.0% 8.0%
[libx264 @ 0x5e2ef8b01340] mb I I16..4: 81.3% 18.7% 0.0%
[libx264 @ 0x5e2ef8b01340] mb P I16..4: 52.9% 0.0% 0.0% P16..4: 0.0% 0.0% 0.0% 0.0% 0.0% skip:47.1%
[libx264 @ 0x5e2ef8b01340] mb B I16..4: 0.0% 5.9% 0.0% B16..8: 0.1% 0.0% 0.0% direct: 0.0% skip:94.0% L0:56.2% L1:43.8% BI: 0.0%
[libx264 @ 0x5e2ef8b01340] 8x8 transform intra:15.4% inter:100.0%
[libx264 @ 0x5e2ef8b01340] coded y,u,v intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
[libx264 @ 0x5e2ef8b01340] i16 v,h,dc,p: 97% 0% 3% 0%
[libx264 @ 0x5e2ef8b01340] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 0% 0% 100% 0% 0% 0% 0% 0% 0%
[libx264 @ 0x5e2ef8b01340] Weighted P-Frames: Y:52.9% UV:52.9%
[libx264 @ 0x5e2ef8b01340] ref P L0: 88.9% 0.0% 0.0% 11.1%
[libx264 @ 0x5e2ef8b01340] kb/s:8.27
Conversion failed!
Traceback (most recent call last):
 File "/home/avishka/projects/read-process-stream/minimal-ffmpeg-error.py", line 58, in <module>
 process.stdin.write(image_bytes)
BrokenPipeError: [Errno 32] Broken pipe