Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
FFmpeg RTSP stream to remote MediaMTX server disconnects after a few seconds [closed]
13 June, by Rorschy
I'm new to RTSP and MediaMTX, and I'm trying to live-stream my screen using FFmpeg and MediaMTX for a specific use case.
Everything works perfectly when both FFmpeg and MediaMTX run on the same machine. However, when I move MediaMTX to a remote server, the stream becomes unstable — I can't maintain a connection or view the stream reliably.
Here is the FFmpeg command I'm using from the client machine:
ffmpeg -f gdigrab -framerate 10 -offset_x 0 -offset_y 0 -video_size 1920x1080 -i desktop \
  -f lavfi -i anullsrc \
  -vcodec libx264 -tune zerolatency -g 30 -sc_threshold 0 -preset ultrafast \
  -f rtsp rtsp:///live/stream
And here’s the relevant MediaMTX log output on the remote server:
2025/06/12 14:28:44 INF [RTSP] [conn :35798] opened
2025/06/12 14:28:44 INF [RTSP] [session 2e487869] created by :35798
2025/06/12 14:28:44 INF [RTSP] [session 2e487869] is publishing to path 'live/stream', 2 tracks (H264, MPEG-4 Audio)
2025/06/12 14:28:45 INF [WebRTC] [session 8a909818] created by :47296
2025/06/12 14:28:45 WAR [WebRTC] [session 8a909818] skipping track 2 (MPEG-4 Audio)
2025/06/12 14:28:47 INF [WebRTC] [session dd0d3af7] created by :46306
2025/06/12 14:28:47 WAR [WebRTC] [session dd0d3af7] skipping track 2 (MPEG-4 Audio)
2025/06/12 14:28:49 INF [WebRTC] [session 5f853024] created by :46320
2025/06/12 14:28:49 WAR [WebRTC] [session 5f853024] skipping track 2 (MPEG-4 Audio)
2025/06/12 14:28:51 INF [WebRTC] [session 3edba9a8] created by :46342
2025/06/12 14:28:51 WAR [WebRTC] [session 3edba9a8] skipping track 2 (MPEG-4 Audio)
2025/06/12 14:28:53 INF [WebRTC] [session 4be5bd9b] created by :46352
2025/06/12 14:28:53 WAR [WebRTC] [session 4be5bd9b] skipping track 2 (MPEG-4 Audio)
2025/06/12 14:28:54 INF [RTSP] [conn :35798] closed: terminated
2025/06/12 14:28:54 INF [RTSP] [session 2e487869] destroyed: session timed out
2025/06/12 14:28:54 INF [WebRTC] [session 8a909818] closed: terminated
2025/06/12 14:28:54 INF [WebRTC] [session 3edba9a8] closed: terminated
2025/06/12 14:28:54 INF [WebRTC] [session 5f853024] closed: terminated
My questions:
- What could be causing the RTSP stream to disconnect when streaming to a remote MediaMTX server?
- Are there any recommended network settings or MediaMTX configuration tweaks to ensure a stable stream over the internet?
Any help or guidance would be greatly appreciated. Thanks!
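A commonly suggested first tweak for this symptom is forcing RTSP over TCP, since the "session timed out" entries above are consistent with RTP-over-UDP packets being dropped on the way to the server. A minimal sketch of the same command with interleaved TCP transport (the target URL is left redacted as in the question):

# Sketch: publish over TCP-interleaved RTSP instead of UDP, which
# survives NAT and lossy internet paths much better.
ffmpeg -f gdigrab -framerate 10 -offset_x 0 -offset_y 0 -video_size 1920x1080 -i desktop \
  -f lavfi -i anullsrc \
  -vcodec libx264 -tune zerolatency -g 30 -sc_threshold 0 -preset ultrafast \
  -rtsp_transport tcp -f rtsp rtsp:///live/stream

MediaMTX accepts TCP-interleaved RTSP on its standard port by default, so this experiment shouldn't require a server-side config change.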
-
ffmpeg how to set max_num_reorder_frames H264
13 June, by Vasil Yordanov
Does anyone know how I can set max_num_reorder_frames to 0 when encoding H264 video? In the docs it appears as:
uint8_t H264RawVUI::bitstream_restriction_flag
PS. Based on the discussion in the comments: what I actually want to accomplish is to have all the frames written in the order in which they were encoded. My use case is: I have, say, 1000 images. I encode each one using the codec, but when I investigate and check the actual packets in the H264 container, I see cases where one frame is written twice (for example ... 1,2,3,3,4,5,6,7,7 ...). What I want is that, once I decode the H264 container, I get back the same images I encoded. Is that possible, and how?
P.P.S.: I don't think g=1 works. Giving some more code for reference, this is what I currently have:

import numpy as np
import ffmpeg, subprocess, av

width, height, encoding_profile, pixel_format = 1280, 800, 'main', 'yuv420p'

# here I create 256 frames where each one has unique pixels: all zeros, ones, twos, etc.
np_images = []
for i in range(256):
    np_image = i + np.zeros((height, width, 3), dtype=np.uint8)
    np_images.append(np_image)
print(f'number of numpy images: {len(np_images)}')

encoder = (ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(width, height))
    .output('pipe:', format='H264', pix_fmt=pixel_format, vcodec='libx264', profile='main', g=1)
    .run_async(pipe_stdin=True, pipe_stdout=True)
)

for timestamp, frame in enumerate(np_images):
    encoder.stdin.write(
        frame
        .astype(np.uint8)
        .tobytes()
    )
encoder.stdin.close()
output = encoder.stdout.read()
encoder.stdout.close()

# here I decode the encoded frames using PyAV
frame_decoder = av.CodecContext.create("h264", "r")
frame_decoder.thread_count = 0
frame_decoder.thread_type = 'NONE'

packets = frame_decoder.parse(output)
decoded_frames = []
for packet in packets:
    frame = frame_decoder.decode(packet)
    decoded_frames.extend(frame)
decoded_frames.extend(frame_decoder.decode())  # flush the decoder
print(f'number of decoded frames: {len(decoded_frames)}')
print('keyframe boolean mask')
print([e.key_frame for e in decoded_frames])

decoded_np_images = []
for frame in decoded_frames:
    decoded_np_images.append(np.array(frame.to_image()))
print(f'number of decoded numpy images: {len(decoded_np_images)}')

# here I check what the decoded frames contain (all zeros, ones, twos, etc.)
print([e[0,0,0].item() for e in decoded_np_images])
The particular problem I am facing is that you can observe this in the output:
number of decoded numpy images: 255
[0, 1, 2, 3, 3, 4, 5, 6, 8, 9, 10, 10, 11, 12, 13, 15, 16, 17, 17, 18, 19, 20, 22, 23, 24, 24, 25, 26, 27, 29, 30, 31, 31, 32, 33, 34, 36, 37, 38, 39, 39, 40, 41, 43, 44, 45, 46, 46, 47, 48, 50, 51, 52, 53, 53, 54, 55, 57, 58, 59, 60, 60, 61, 62, 64, 65, 66, 67, 67, 68, 69, 71, 72, 73, 74, 74, 75, 76, 78, 79, 80, 81, 81, 82, 83, 85, 86, 87, 88, 88, 89, 90, 91, 93, 94, 95, 95, 96, 97, 98, 100, 101, 102, 102, 103, 104, 105, 107, 108, 109, 109, 110, 111, 112, 114, 115, 116, 116, 117, 118, 119, 121, 122, 123, 123, 124, 125, 126, 128, 129, 130, 131, 131, 132, 133, 135, 136, 137, 138, 138, 139, 140, 142, 143, 144, 145, 145, 146, 147, 149, 150, 151, 152, 152, 153, 154, 156, 157, 158, 159, 159, 160, 161, 163, 164, 165, 166, 166, 167, 168, 170, 171, 172, 173, 173, 174, 175, 176, 178, 179, 180, 180, 181, 182, 183, 185, 186, 187, 187, 188, 189, 190, 192, 193, 194, 194, 195, 196, 197, 199, 200, 201, 201, 202, 203, 204, 206, 207, 208, 208, 209, 210, 211, 213, 214, 215, 216, 216, 217, 218, 220, 221, 222, 223, 223, 224, 225, 227, 228, 229, 230, 230, 231, 232, 234, 235, 236, 237, 237, 238, 239, 241, 242, 243, 244, 244, 245, 246, 248, 249, 250, 251, 251, 252, 253]
I still have frames that appear twice (and, correspondingly, some that are missing).
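For reference, max_num_reorder_frames is driven by B-frames: with B-frames disabled, the encoder emits frames in presentation order and x264 can signal zero reorder frames in the VUI. A minimal CLI sketch of that idea, mirroring the rawvideo input parameters from the script above (in ffmpeg-python the equivalent would presumably be passing bf=0 to .output(), given how that wrapper maps keyword arguments to flags):

# Sketch: -bf 0 disables B-frames in libx264, so no output
# reordering is needed; -g 1 keeps the all-intra GOP from above.
ffmpeg -f rawvideo -pix_fmt rgb24 -s 1280x800 -i pipe: \
  -c:v libx264 -profile:v main -pix_fmt yuv420p -bf 0 -g 1 \
  -f h264 pipe:

If duplicates persist even with B-frames disabled, the parse/decode loop itself may be worth isolating, since parse() is not guaranteed to return packets aligned one-to-one with encoded frames.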
-
How to convert an MJPEG stream to YUV420p, then hardware encode to h264 on rpi4 using go2rtc and frigate? [closed]
13 June, by Josh Pirihi
I am putting together a dashcam/DVR/reversing-camera system for my van. I am using some analogue HD reversing cameras and AHD-to-USB dongles along with a Raspberry Pi 4. The Pi is running Frigate in Docker on a fresh Raspberry Pi OS install. The AHD dongles show up straight away as /dev/video0 when plugged in.
I am running into an issue getting the MJPEG stream from the dongle accepted by the hardware h264 encoder. I can feed the hardware encoder the raw YUYV 4:2:2 stream, but due to bandwidth limitations this cuts the framerate intolerably low (720p at 10 fps, 1080p at 5 fps). Similarly, I can use the software encoder to convert the MJPEG stream at 30 fps, but this uses 200% CPU per camera, so it is no good once I add more cameras (at least 2 in total, maybe more).
I have played around with frigate, and have reduced it back to just the go2rtc docker container to troubleshoot until I get it working.
The go2rtc FFMPEG Devices (USB) tab lists the dongle's supported formats (screenshot not reproduced here).
The basic go2rtc config gives me 10 fps at 720p using the hardware encoder. This ingests the raw stream, I think:
streams:
  grill:
    - "ffmpeg:device?video=0&video_size=1280x720#video=h264#hardware"
Telling it to use MJPEG results in an error:
streams:
  grill:
    - "ffmpeg:device?video=0&input_format=mjpeg&video_size=1280x720#video=h264#hardware"
go2rtc-1 | 19:34:14.379 WRN [rtsp] error="streams: exec/rtsp\n[h264_v4l2m2m @ 0x7facadfb40] Encoder requires yuv420p pixel format.\n[vost#0:0/h264_v4l2m2m @ 0x7faf9aa3a0] Error while opening encoder - maybe incorrect parameters such as bit_rate, rate, width or height.\nError while filtering: Invalid argument\n[out#0/rtsp @ 0x7fafff7ec0] Nothing was written into output file, because at least one of its streams received no packets.\n" stream=grill
I tried splitting it into steps to ingest the MJPEG, then convert the pixel format, then encode to h264; however, this results in the same error. Both _mjpeg feeds work, but the final encoded feed has the same encoder error:
streams:
  grill_mjpeg: "ffmpeg:device?video=/dev/video0&input_format=mjpeg&video_size=1920x1080"
  grill_mjpeg_yuv: exec:ffmpeg -i http://localhost:1984/api/stream.mjpeg?src=grill_mjpeg -pix_fmt yuv420p -c:v copy -rtsp_transport tcp -f rtsp {output}
  grill: ffmpeg:http://localhost:1984/api/stream.mjpeg?src=grill_mjpeg_yuv#video=h264#hardware
go2rtc-1 | 19:39:07.871 WRN [rtsp] error="streams: exec/rtsp\n[h264_v4l2m2m @ 0x7f7f1aca70] Encoder requires yuv420p pixel format.\n[vost#0:0/h264_v4l2m2m @ 0x7f820f83b0] Error while opening encoder - maybe incorrect parameters such as bit_rate, rate, width or height.\nError while filtering: Invalid argument\n[out#0/rtsp @ 0x7f82745ec0] Nothing was written into output file, because at least one of its streams received no packets.\n" stream=grill
If I change grill_mjpeg_yuv to "-c:v mjpeg" instead of copy, it pegs one of the CPU cores at 100% and this stream will not output anything.
Can anyone offer any tips?
As a small side consideration, having an intermediate MJPEG feed available would be helpful for displaying the reversing camera on a monitor in the van with the lowest possible latency; however, I want h264 streams for recording and for viewing over the van's 4G connection.
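One thing worth noting in the split pipeline above: -pix_fmt yuv420p combined with -c:v copy is a no-op, because stream copy bypasses decoding entirely, so the grill_mjpeg_yuv feed is still MJPEG when it reaches the hardware encoder. A single-invocation sketch that decodes the MJPEG, converts to yuv420p, and hands the result to the v4l2m2m encoder (device path, bitrate, and the RTSP target are assumptions, not taken from the question):

# Sketch: decode MJPEG from the dongle, convert pixel format,
# then hardware-encode; go2rtc's RTSP server listens on 8554 by default.
ffmpeg -f v4l2 -input_format mjpeg -video_size 1920x1080 -framerate 30 -i /dev/video0 \
  -pix_fmt yuv420p -c:v h264_v4l2m2m -b:v 4M \
  -f rtsp rtsp://localhost:8554/grill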
-
How to install ffmpeg on Ubuntu using the command line?
12 June, by Cheng Jaycee Jiang
A little background... This is a piece of code in my Dockerfile. I want to deploy my app to Google App Engine. Somehow I couldn't install ffmpeg.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
RUN apt-get install ffmpeg
This is error log:
E: Unable to locate package ffmpeg
The command '/bin/sh -c apt-get install ffmpeg' returned a non-zero code: 100
ERROR
ERROR: build step "gcr.io/cloud-builders/docker@sha256:ef2e6744a171cfb0e8a0ef27f9b9a34970341bfc0c3d401afdeedca72292cf73" failed: exit status 100
I found this, but it didn't work for me; it complained that add-apt-repository is not a valid command: https://askubuntu.com/questions/691109/how-do-i-install-ffmpeg-and-codecs
Can anyone help me with this? Thanks!!!
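The usual cause of "E: Unable to locate package" in a Docker build is that the base image ships with empty apt package lists; apt-get update has to run in the same RUN step as the install. A minimal sketch of the conventional fix (the cleanup line is optional but keeps the image small):

RUN apt-get update && \
    apt-get install -y --no-install-recommends ffmpeg && \
    rm -rf /var/lib/apt/lists/*

ffmpeg has been in the default Ubuntu repositories for years, so the add-apt-repository step from the linked answer shouldn't be necessary on a current base image.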
-
ffmpeg combine images to video and specify duration for each image? [duplicate]
11 June, by hanshenrik
Can ffmpeg combine images into a video with a specific duration for each image? For example, I want 1.jpg to be displayed for 100 ms, 2.jpg for 120 ms, 3.jpg for 115 ms, and so on. Can ffmpeg do this?
I have a bunch of images, each with its own timestamp, and the timestamps decide how long each image should be shown in the video.
This is similar to the question Conversion of images to video with variable fps using FFmpeg, but unlike that post, I do not need a gradual increase; rather, I need to specify the duration of each image.
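The concat demuxer covers this case directly with its duration directive. A minimal sketch using the filenames and durations from the question (per the demuxer's documented behaviour, the last file is listed twice so that its duration is honoured):

# list.txt (concat demuxer script; durations are in seconds)
file '1.jpg'
duration 0.100
file '2.jpg'
duration 0.120
file '3.jpg'
duration 0.115
file '3.jpg'

# -vsync vfr keeps the variable frame timing instead of resampling
# to a constant rate; yuv420p keeps the output widely playable.
ffmpeg -f concat -safe 0 -i list.txt -vsync vfr -pix_fmt yuv420p out.mp4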