Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
FFMPEG SRT multiple callers in to one listener ?
24 March, by Ian

I am using this line:
exec_push /home/production/bin/ffmpeg -i rtmp://localhost:1935/live/slot4 -codec copy -g 1 -bsf:v h264_mp4toannexb -f mpegts srt://0.0.0.0:50330?mode=listener -loglevel verbose;
in nginx to launch FFmpeg and have it transmux RTMP to SRT. That being said, I'm curious if there is a flag or a way to have multiple SRT callers call into this one stream. If not, can you provide an alternative solution?
-
encoding a video to AV1 for compression
24 March, by living being

I wrote a command to encode a 1080p video to the AV1 codec to reduce the file size. At the same time, I encoded the same video to x265. AV1 is said to reduce file size by around 40-50% more than x265, but I didn't achieve that.
The original file size: 74 MB
the file encoded with AV1: 53 MB
The file encoded with x265: 34 MB
My AV1 command:
ffmpeg -i "input.webm" -vcodec libsvtav1 -preset 4 -crf 38 -acodec libopus -ac 1 -b:a 24K "output.mkv";
My x265 command:
ffmpeg -i "input.webm" -vcodec libx265 -preset fast -crf 31 -acodec libopus -ac 1 -b:a 24K "output.mkv";
I used a higher CRF for AV1 to get a smaller file, but it didn't work. What's wrong with my AV1 command?
I used different input files and got similar results. My OS: Ubuntu 24.10
-
How to Pause and Resume Screen Recording in FFmpeg on Windows ?
24 March, by Iman Sajadpur

I use FFmpeg on Windows to record my screen. I want to pause and resume the recording properly. I know that pressing Ctrl + S, the Pause key on the keyboard, or suspending FFmpeg via Resource Monitor stops the process, but screen recording continues in the background. Here is an example of the command I use for screen recording:

ffmpeg -f gdigrab -probesize 100M -i desktop -f dshow -channel_layout stereo -i audio="Microphone (2- High Definition Audio Device)" output.mp4
How can I pause recording completely so that no frames are captured during the pause and resume it seamlessly?
-
Assigning dts values to encoded packets
24 March, by Alex

I have a dump of H264-encoded data which I need to put into an MP4 container. I verified the validity of the encoded data by running the mp4box utility against it: the mp4 file created by mp4box contained a proper 17-second video. Interestingly, if I try ffmpeg to achieve the same, the resulting video is 34 seconds long and rather crappy (perhaps ffmpeg decodes the video and then re-encodes it, which results in a loss of quality?). Anyway, for my project I can't use a command-line approach and need to come up with a programmatic way to embed the data in the MP4 container.
Below is the code I use (I removed error checking for brevity. During execution all the calls succeed):
AVFormatContext* pInputFormatContext = avformat_alloc_context();
avformat_open_input(&pInputFormatContext, "Data.264", NULL, NULL);
avformat_find_stream_info(pInputFormatContext, NULL);
AVRational* pTime_base = &pInputFormatContext->streams[0]->time_base;
int nFrameRate = pInputFormatContext->streams[0]->r_frame_rate.num /
                 pInputFormatContext->streams[0]->r_frame_rate.den;
int nWidth = pInputFormatContext->streams[0]->codecpar->width;
int nHeight = pInputFormatContext->streams[0]->codecpar->height;
// nWidth = 1920, nHeight = 1080, nFrameRate = 25

// Create output objects
AVFormatContext* pOutputFormatContext = NULL;
avformat_alloc_output_context2(&pOutputFormatContext, NULL, NULL, "Destination.mp4");
AVCodec* pVideoCodec = avcodec_find_encoder(pOutputFormatContext->oformat->video_codec /*AV_CODEC_ID_H264*/);
AVStream* pOutputStream = avformat_new_stream(pOutputFormatContext, NULL);
pOutputStream->id = pOutputFormatContext->nb_streams - 1;
AVCodecContext* pCodecContext = avcodec_alloc_context3(pVideoCodec);
switch (pVideoCodec->type)
{
case AVMEDIA_TYPE_VIDEO:
    pCodecContext->codec_id = codec_id;
    pCodecContext->bit_rate = 400000;
    /* Resolution must be a multiple of two. */
    pCodecContext->width = nWidth;
    pCodecContext->height = nHeight;
    /* timebase: This is the fundamental unit of time (in seconds) in terms
     * of which frame timestamps are represented. For fixed-fps content,
     * timebase should be 1/framerate and timestamp increments should be
     * identical to 1. */
    pOutputStream->time_base.num = 1;
    pOutputStream->time_base.den = nFrameRate;
    pCodecContext->time_base = pOutputStream->time_base;
    pCodecContext->gop_size = 12; /* emit one intra frame every twelve frames at most */
    pCodecContext->pix_fmt = STREAM_PIX_FMT;
    break;
default:
    break;
}
/* copy the stream parameters to the muxer */
avcodec_parameters_from_context(pOutputStream->codecpar, pCodecContext);
avio_open(&pOutputFormatContext->pb, "Destination.mp4", AVIO_FLAG_WRITE);

// Start writing
AVDictionary* pDict = NULL;
avformat_write_header(pOutputFormatContext, &pDict);

// Process packets
AVPacket packet;
int64_t nCurrentDts = 0;
int64_t nDuration = 0;
int nReadResult = 0;
while (nReadResult == 0)
{
    nReadResult = av_read_frame(pInputFormatContext, &packet);
    // At this point, packet.dts == AV_NOPTS_VALUE.
    // The duration field of the packet contains valid data
    packet.flags |= AV_PKT_FLAG_KEY;
    nDuration = packet.duration;
    packet.dts = nCurrentDts;
    packet.dts = av_rescale_q(nCurrentDts,
                              pOutputFormatContext->streams[0]->codec->time_base,
                              pOutputFormatContext->streams[0]->time_base);
    av_interleaved_write_frame(pOutputFormatContext, &packet);
    nCurrentDts += nDuration;
    nDuration += packet.duration;
    av_free_packet(&packet);
}
av_write_trailer(pOutputFormatContext);
The properties for the Destination.mp4 file I receive indicate it is about 1 hour long with frame rate 0. I am sure the culprit is in the way I calculate dts values for each packet and use av_rescale_q(), but I do not have sufficient understanding of the avformat library to figure out the proper way to do it. Any help will be appreciated!
-
FFmpeg - What does non monotonically increasing dts mean ?
24 March, by Mukund Manikarnike

Observations - Part I
I saw a suggestion elsewhere to run the following command to see if there's something wrong with my .mp4.
ffmpeg -v error -i ~/Desktop/5_minute_sync_output_15mn.mp4 -f null - 2>error.log
When I run the above command, I see a whole bunch of log lines along the lines of what's shown below.
Application provided invalid, non monotonically increasing dts to muxer in stream 0: 15635 >= 15635
From searching and reading up quite a bit, I understand this to mean that the decoding timestamps aren't in sequential order.
Observations - Part II
But, inspecting the frames of the same mp4 using the following command and some post-processing, I don't see pkt_dts within the frames_info JSON being out of order for either the video or audio streams.

ffprobe -loglevel panic -of json -show_frames ~/Desktop/5_minute_sync_output_15mn.mp4
This makes me doubt my initial understanding in Observations - Part I.
Are these 2 things not related? Any help on this will be greatly appreciated.