Newest 'ffmpeg' Questions - Stack Overflow
Articles published on the site
-
Create a video from JPGs without having all the pictures at the beginning with ffmpeg
30 March, by Chklang
I have a game which can create screenshots, and I want to turn them into an mp4 video. So I have this command:
ffmpeg -framerate 15 -i %06d.png -s hd1080 -vcodec libx264 -r 30 timelapse.mp4
But my game lasts 8 hours, so after auto-compressing the pictures I have more than 9 TB of them. I want to start the ffmpeg process before picture generation has finished, i.e. I want ffmpeg to wait for each new picture and consume it as it appears.
How can I do it?
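One approach (a sketch, not something from the question itself): switch from the numbered-pattern input to the image2pipe demuxer and feed the frames yourself, yielding each file only once the next one exists so a half-written file is never read. The DONE sentinel file name is an assumption; the game (or a wrapper script) would have to create it when generation finishes.

```python
import os
import subprocess
import time

def frames_as_ready(directory, start=1, sentinel="DONE", poll=1.0):
    """Yield frame paths 000001.png, 000002.png, ... in order.

    A frame is yielded only once the *next* frame (or the sentinel
    file) exists, so a frame still being written is never consumed.
    """
    i = start
    while True:
        frame = os.path.join(directory, "%06d.png" % i)
        nxt = os.path.join(directory, "%06d.png" % (i + 1))
        done = os.path.join(directory, sentinel)
        while not os.path.exists(nxt):
            if os.path.exists(done):
                if os.path.exists(frame):
                    yield frame
                return
            time.sleep(poll)
        yield frame
        i += 1

def encode(directory, output="timelapse.mp4"):
    """Pipe frames into ffmpeg's stdin via the image2pipe demuxer."""
    proc = subprocess.Popen(
        ["ffmpeg", "-f", "image2pipe", "-framerate", "15", "-i", "-",
         "-s", "hd1080", "-c:v", "libx264", "-r", "30", output],
        stdin=subprocess.PIPE,
    )
    for frame in frames_as_ready(directory):
        with open(frame, "rb") as f:
            proc.stdin.write(f.read())
    proc.stdin.close()
    proc.wait()

if __name__ == "__main__":
    encode(".")
```

Since ffmpeg only ever sees a pipe, it naturally blocks until the next frame arrives, and the frames can be deleted as soon as they have been fed, which also helps with the 9 TB problem.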
-
FFmpeg options scale=-1 and scale=-2
29 March, by smal
I tried to convert and resize a video with scale=-1:720, but got the error "width not divisible by 2". I solved it with scale=-2:720. What is the difference between
scale=-1:720
and
scale=-2:720
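In short: -1 makes ffmpeg pick whatever width preserves the aspect ratio exactly, which can be an odd number; -2 snaps that width to a multiple of 2, which yuv420p/libx264 require because chroma is subsampled by 2 in each direction. A small model of the behavior (an approximation for illustration, not ffmpeg's exact rounding code):

```python
def auto_width(src_w, src_h, dst_h, mode):
    """Approximate the width ffmpeg's scale filter picks for
    scale=<mode>:<dst_h>, where mode is -1 or -2.

    mode == -1: preserve the aspect ratio exactly (may be odd).
    mode == -2: preserve the aspect ratio, then round to the
                nearest multiple of 2.
    """
    w = src_w * dst_h / src_h
    if mode == -1:
        return round(w)
    # mode == -2: nearest even number
    return round(w / 2) * 2
```

For example, a 350x288 source scaled to height 144 gets width 175 with scale=-1:144 (which libx264 rejects) but 176 with scale=-2:144; a 1920x1080 source scaled to 720 gets 1280 either way, which is why -1 often works until it suddenly doesn't.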
-
How to automatically rotate video based on camera orientation while recording?
29 March, by jestrabikr
I am developing a Mediasoup SFU and a client web app, where I record the client's stream and send it to FFmpeg as plain RTP. FFmpeg creates an HLS recording (.m3u8 and .ts files), because I need to be able to switch between the WebRTC live stream and the HLS recording before the live stream ends.
My problem is that when I test the app and rotate my phone 90 degrees, the recording's aspect ratio stays the same but the image is rotated (as shown in images 1.1, 1.2 and 1.3 below). I need the recording to change aspect ratio dynamically according to camera orientation. How can I do that using FFmpeg?
On the live stream it works perfectly fine (as shown in images 2.1 and 2.2 below): when the phone is rotated, the aspect ratio changes and the video is shown correctly. I think it works on the live stream because WebRTC signals orientation changes somehow (but that does not carry over to the recording).
These are my ffmpeg command arguments for recording (version 6.1.1-3ubuntu5):
let commandArgs = [
  "-loglevel", "info",
  "-protocol_whitelist", "pipe,udp,rtp",
  "-fflags", "+genpts+discardcorrupt",
  "-reinit_filter", "1",
  "-strict", "-2",
  "-f", "sdp", "-i", "pipe:0",
  "-map", "0:v:0",
  "-c:v", "libx264", "-b:v", "1500k",
  "-profile:v", "high", "-level:v", "4.1",
  "-pix_fmt", "yuv420p", "-g", "30",
  "-map", "0:a:0",
  "-c:a", "aac", "-b:a", "128k",
  "-movflags", "+frag_keyframe+empty_moov",
  "-f", "hls", "-hls_time", "4",
  "-hls_list_size", "0",
  "-hls_flags", "split_by_time",
  `${filePath}.m3u8`
];
Image 1.1 - Portrait mode in recording:
Image 1.2 - Landscape mode in recording (rotated 90deg to my left side - front camera is on my left side):
Image 1.3 - Landscape mode in recording (rotated 90deg to my right side):
Image 2.1 - Portrait mode in live stream (correct behavior):
Image 2.2 - Landscape mode in live stream (correct behavior):
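A likely explanation for the live/recording difference: WebRTC carries rotation as the video-orientation (CVO) RTP header extension, which the receiving browser applies at render time, but plain RTP piped into FFmpeg loses that metadata, so FFmpeg never learns the phone turned. One workaround sketch (hypothetical helper, not part of the question's code): have the web app report orientation over the signaling channel and restart the recorder with a transpose filter spliced into commandArgs.

```python
def transpose_filter(degrees):
    """Map a client-reported clockwise rotation to extra ffmpeg
    arguments. transpose=1 rotates 90 degrees clockwise and
    transpose=2 rotates 90 degrees counter-clockwise; 180 degrees
    is two clockwise turns chained in one filter graph.
    """
    table = {
        0: [],
        90: ["-vf", "transpose=1"],
        180: ["-vf", "transpose=1,transpose=1"],
        270: ["-vf", "transpose=2"],
    }
    return table[degrees % 360]
```

Because a transpose changes the output dimensions, the ffmpeg process (or at least the HLS segment) has to be restarted at each rotation event; a single long-running encode cannot switch its frame size mid-stream.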
-
How to get webcam frames one by one but also compressed?
29 March, by Vorac
I need to grab frames from the webcam of a laptop, transmit them one by one, and have the receiving side stitch them into a video. I picked
ffmpeg-python
as the wrapper of choice, and the example from the docs works right away:

#!/usr/bin/env python
# In this file: reading frames one by one from the webcam.
import ffmpeg

width = 640
height = 480

reader = (
    ffmpeg
    .input('/dev/video0', s='{}x{}'.format(width, height))
    .output('pipe:', format='rawvideo', pix_fmt='yuv420p')
    .run_async(pipe_stdout=True)
)

# This is here only to test the reader.
writer = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='yuv420p', s='{}x{}'.format(width, height))
    .output('/tmp/test.mp4', format='h264', pix_fmt='yuv420p')
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

while True:
    chunk = reader.stdout.read(width * height * 3 // 2)  # one yuv420p frame; read() needs an int
    print(len(chunk))
    writer.stdin.write(chunk)
Now for the compression part.
My reading of the docs is that the input to the reader perhaps needs to be
rawvideo
but nothing else does. I tried replacing
rawvideo
with
h264
in my code, but that resulted in empty frames. I'm considering a third invocation looking like this, but is that really the correct approach?

encoder = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='yuv420p', s='{}x{}'.format(width, height))
    .output('pipe:', format='h264', pix_fmt='yuv420p')
    .run_async(pipe_stdin=True, pipe_stdout=True)
)
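One possible arrangement (a sketch using plain subprocess instead of ffmpeg-python; the v4l2 device name and the choice of a raw H.264 byte stream for transmission are assumptions): keep the rawvideo reader, because only raw frames have a fixed, known size that lets you split them one by one. A compressed stream has no fixed frame size, so after the raw stage you pipe through a second ffmpeg that emits H.264, and you transmit that stream in byte chunks rather than per frame, letting the receiver's decoder find the frame boundaries.

```python
import subprocess

def yuv420_frame_size(width, height):
    """Bytes per raw yuv420p frame: a full-resolution luma plane plus
    two quarter-resolution chroma planes (an integer, unlike w*h*1.5)."""
    return width * height * 3 // 2

def compressed_reader(width=640, height=480, device="/dev/video0"):
    """Chain two ffmpeg processes: webcam -> raw frames -> H.264.

    The first process grabs fixed-size raw frames (frame-exact access
    if you need it); the second encodes them into a single compressed
    byte stream on its stdout.
    """
    raw = subprocess.Popen(
        ["ffmpeg", "-f", "v4l2", "-s", f"{width}x{height}", "-i", device,
         "-f", "rawvideo", "-pix_fmt", "yuv420p", "pipe:1"],
        stdout=subprocess.PIPE)
    enc = subprocess.Popen(
        ["ffmpeg", "-f", "rawvideo", "-pix_fmt", "yuv420p",
         "-s", f"{width}x{height}", "-i", "pipe:0",
         "-f", "h264", "pipe:1"],
        stdin=raw.stdout, stdout=subprocess.PIPE)
    return enc.stdout  # read compressed chunks from here

if __name__ == "__main__":
    stream = compressed_reader()
    while True:
        chunk = stream.read(4096)
        if not chunk:
            break
        # transmit chunk to the receiving side here
```

This is essentially the three-invocation idea from the question collapsed to two: reader and encoder are connected directly, and the Python side only ever touches the compressed output.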
-
convert a heif file to png/jpg using ffmpeg
28 March, by Ajitesh Singh
The use case is very straightforward. ImageMagick is able to do the conversion, but I want to do it with ffmpeg. Here are all the commands I have tried; all of them give a "moov atom not found" error.
ffmpeg -i /Users/ajitesh/Downloads/sample1.heif -c:v png -pix_fmt rgb48 /Users/ajitesh/Downloads/sample.png
Output
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f85aa813200] moov atom not found
/Users/ajitesh/Downloads/sample1.heif: Invalid data found when processing input
It seems like the moov atom is actually not present, as shown by trying to extract its location with the following command:
ffmpeg -v trace -i /Users/ajitesh/Downloads/sample1.heif 2>&1 | grep -e type:\'mdat\' -e type:\'moov\'
Output
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f824c00f000] type:'mdat' parent:'root' sz: 2503083 420 2503495
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f824c00f000] type:'mdat' parent:'root' sz: 2503083 420 2503495
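The trace output is consistent with a HEIF file: HEIF stores its image data under a 'meta' item structure rather than a 'moov' movie box, so ffmpeg's mov/mp4 demuxer (at least in builds without HEIF support) reports "moov atom not found" even though the file is valid. A workaround sketch (assumptions: libheif's heif-convert or ImageMagick is installed; the sniffing helper is purely illustrative):

```python
import shutil
import subprocess

def looks_like_heif(header: bytes) -> bool:
    """Cheap sniff of the first bytes: an ISO-BMFF file starts with a
    4-byte box size then 'ftyp'; common HEIF brands include mif1,
    heic and heix. Such files carry no 'moov' box at all."""
    return (len(header) >= 12 and header[4:8] == b"ftyp"
            and header[8:12] in (b"mif1", b"heic", b"heix"))

def heif_to_png(src: str, dst: str) -> None:
    """Decode with libheif's heif-convert when available, falling
    back to ImageMagick; ffmpeg can then post-process the PNG."""
    if shutil.which("heif-convert"):
        subprocess.run(["heif-convert", src, dst], check=True)
    else:
        subprocess.run(["convert", src, dst], check=True)  # ImageMagick
```

So the practical answer for this ffmpeg version is a two-step pipeline: let a HEIF-aware decoder produce the PNG/JPG, and use ffmpeg only for whatever processing comes after.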