Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • How can I display an image from Uint8List with the format AVFormat.RGBA8888 more quickly in Flutter?

    27 March, by Salawesh

    I'm looking to display images as quickly as possible. The input data arrives as a Uint8List (from dart:typed_data) and is encoded as AVFormat.RGBA8888 by ffmpeg.

    I'm looking for a way to improve the performance of my graphics rendering code, and to find out whether the work can be moved off the main thread (isolate or compute).

    Here's my current working code.

        final buffer = await ui.ImmutableBuffer.fromUint8List(data.data);
        final descriptor = ui.ImageDescriptor.raw(
          buffer,
          width: data.width,
          height: data.height,
          pixelFormat: ui.PixelFormat.rgba8888,
        );
        final codec = await descriptor.instantiateCodec(); // native codec of ui.Image
        final frameInfo = await codec.getNextFrame(); // frameInfo.image is the decoded ui.Image
    

    This currently runs on my main thread.

  • How to manage HLS in the Nginx RTMP module

    27 March, by syrkandonut

    I would like to manage the HLS broadcast on request (stop/start, or some other way) in the Nginx RTMP module. My RTMP server needs to support many cameras, but when it runs an ffmpeg exec for each of 200-300 RTMP streams this is very hard on the processor, so I would like to run the ffmpeg command only on request. How could this be done?

    RTMP server:

    rtmp {
        server {
            listen 1935;
            chunk_size 8192;
    
            application live {
                live on;
                record off;
                drop_idle_publisher 10s;
                allow publish all;
    
                on_publish rtmp-router:8082/on_publish;
    
            exec ffmpeg -i rtmp://localhost:1935/live/$name
                -f lavfi -i anullsrc -c:v copy -c:a aac -shortest -f flv rtmp://localhost:1935/hls/${name}_main;
            }
    
    
            application hls {
                live on;
                hls on;
                hls_fragment_naming system;
                hls_fragment 2;
                hls_playlist_length 4;
                hls_path /opt/data/hls;
                hls_nested on;
    
                hls_variant _main BANDWIDTH=878000,RESOLUTION=640x360;
            }
        }
    }
    

    I would like to solve this through nginx or Python itself, since the server running these tasks is written in FastAPI. A sketch of what I mean is shown below.
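
    Here's a minimal sketch of that on-demand approach, assuming a small FastAPI service owns the ffmpeg relay processes and nginx (or anything else) calls it over HTTP; the endpoint paths and the bookkeeping names are my own, hypothetical choices:

    import subprocess

    from fastapi import FastAPI, HTTPException

    app = FastAPI()
    processes: dict[str, subprocess.Popen] = {}  # stream name -> running ffmpeg relay

    def ffmpeg_cmd(name: str) -> list[str]:
        # The same relay the nginx exec directive runs, built as an argv list.
        return [
            "ffmpeg", "-i", f"rtmp://localhost:1935/live/{name}",
            "-f", "lavfi", "-i", "anullsrc",
            "-c:v", "copy", "-c:a", "aac", "-shortest",
            "-f", "flv", f"rtmp://localhost:1935/hls/{name}_main",
        ]

    @app.post("/streams/{name}/start")
    def start_stream(name: str):
        if name in processes and processes[name].poll() is None:
            raise HTTPException(status_code=409, detail="already running")
        processes[name] = subprocess.Popen(ffmpeg_cmd(name))
        return {"status": "started"}

    @app.post("/streams/{name}/stop")
    def stop_stream(name: str):
        proc = processes.pop(name, None)
        if proc is None or proc.poll() is not None:
            raise HTTPException(status_code=404, detail="not running")
        proc.terminate()       # ask ffmpeg to exit cleanly
        proc.wait(timeout=10)  # reap the child process
        return {"status": "stopped"}

    Run it with uvicorn; the on_publish callback nginx-rtmp already sends (or an on_play one) could hit these endpoints, so each relay runs only while someone actually needs that stream.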

  • VLC dead input for RTP stream

    27 March, by CaptainCheese

    I'm working on creating an RTP stream that's meant to display live waveform data from Pioneer prolink players, with the goal of receiving the video in a Flutter frontend. I initially just sent a base-24 encoding of the raw packed ARGB ints per frame across a Kafka topic, but processing that data in Flutter proved untenable and bogged down the main UI thread. I'm not sure this is the most optimal approach, but I'm trying to get anything working that means some speedup on the frontend. The issue with the following implementation is that when I run vlc --rtsp-timeout=120000 --network-caching=30000 -vvvv stream_1.sdp, where

    % cat stream_1.sdp
    v=0
    o=- 0 1 IN IP4 127.0.0.1
    s=RTP Stream
    c=IN IP4 127.0.0.1
    t=0 0
    a=tool:libavformat
    m=video 5007 RTP/AVP 96
    a=rtpmap:96 H264/90000
    

    I see (among other questionable logs) the following:

    [0000000144c44d10] live555 demux error: no data received in 10s, aborting
    [00000001430ee2f0] main input debug: EOF reached
    [0000000144b160c0] main decoder debug: killing decoder fourcc `h264'
    [0000000144b160c0] main decoder debug: removing module "videotoolbox"
    [0000000144b164a0] main packetizer debug: removing module "h264"
    [0000000144c44d10] main demux debug: removing module "live555"
    [0000000144c45bb0] main stream debug: removing module "record"
    [0000000144a64960] main stream debug: removing module "cache_read"
    [0000000144c29c00] main stream debug: removing module "filesystem"
    [00000001430ee2f0] main input debug: Program doesn't contain anymore ES
    [0000000144806260] main playlist debug: dead input
    [0000000144806260] main playlist debug: changing item without a request (current 0/1)
    [0000000144806260] main playlist debug: nothing to play
    [0000000142e083c0] macosx interface debug: Playback has been ended
    [0000000142e083c0] macosx interface debug: Releasing IOKit system sleep blocker (37463)
    

    This is sort of confusing, because when I run ffmpeg -protocol_whitelist file,crypto,data,rtp,udp -i stream_1.sdp -vcodec libx264 -f null - I see a number of logs like

    [h264 @ 0x139304080] non-existing PPS 0 referenced
        Last message repeated 1 times
    [h264 @ 0x139304080] decode_slice_header error
    [h264 @ 0x139304080] no frame!
    

    After which I see the stream is received and I start getting telemetry on it:

    Input #0, sdp, from 'stream_1.sdp':
      Metadata:
        title           : RTP Stream
      Duration: N/A, start: 0.016667, bitrate: N/A
      Stream #0:0: Video: h264 (Constrained Baseline), yuv420p(progressive), 1200x200, 60 fps, 60 tbr, 90k tbn
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    [libx264 @ 0x107f04f40] using cpu capabilities: ARMv8 NEON
    [libx264 @ 0x107f04f40] profile High, level 3.1, 4:2:0, 8-bit
    Output #0, null, to 'pipe:':
      Metadata:
        title           : RTP Stream
        encoder         : Lavf61.7.100
      Stream #0:0: Video: h264, yuv420p(tv, progressive), 1200x200, q=2-31, 60 fps, 60 tbn
          Metadata:
            encoder         : Lavc61.19.101 libx264
          Side data:
            cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
    [out#0/null @ 0x60000069c000] video:144KiB audio:0KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: unknown
    frame= 1404 fps= 49 q=-1.0 Lsize=N/A time=00:00:23.88 bitrate=N/A speed=0.834x
    

    Not sure why VLC is turning me down like some kind of Berghain bouncer that lets nobody in the entire night.

    I initially tried just converting the ARGB ints to a YUV420p buffer and using that to create the Frame objects, but I couldn't for the life of me figure out how to initialize it properly; my attempts kept spitting out garbled junk.

    Please go easy on me, I've made an unhealthy habit of resolving nearly all of my coding questions by simply lurking the internet for answers, but that's not really helping me solve this one.
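
    One way to narrow this down (my suggestion, not something from the setup above): point VLC at a known-good RTP source first. The sketch below has ffmpeg stream a synthetic test pattern to the same port with in-band SPS/PPS and write out a matching SDP; the port and file names mirror the setup above but are otherwise assumptions:

    import subprocess

    # Stream a 1200x200@60 test pattern over RTP, repeating SPS/PPS before every
    # keyframe so a late-joining client like VLC can start decoding mid-stream.
    cmd = [
        "ffmpeg", "-re",
        "-f", "lavfi", "-i", "testsrc=size=1200x200:rate=60",
        "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
        "-x264-params", "repeat-headers=1:keyint=60",
        "-sdp_file", "stream_test.sdp",  # ffmpeg writes the matching SDP here
        "-f", "rtp", "rtp://127.0.0.1:5007",
    ]
    subprocess.run(cmd, check=True)

    If VLC plays stream_test.sdp but not the Java-produced stream, missing in-band parameter sets are a plausible culprit (the "non-existing PPS 0 referenced" warnings point the same way): ffmpeg waits until the next SPS/PPS arrives and recovers, while VLC's live555 demuxer gives up after its 10-second timeout.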

    Here's the Java I'm working on (the meat of the RTP comms happens in updateWaveformForPlayer()):

    package com.bugbytz.prolink;
    
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.bytedeco.ffmpeg.global.avcodec;
    import org.bytedeco.ffmpeg.global.avutil;
    import org.bytedeco.javacv.FFmpegFrameGrabber;
    import org.bytedeco.javacv.FFmpegFrameRecorder;
    import org.bytedeco.javacv.FFmpegLogCallback;
    import org.bytedeco.javacv.Frame;
    import org.bytedeco.javacv.FrameGrabber;
    import org.deepsymmetry.beatlink.CdjStatus;
    import org.deepsymmetry.beatlink.DeviceAnnouncement;
    import org.deepsymmetry.beatlink.DeviceAnnouncementAdapter;
    import org.deepsymmetry.beatlink.DeviceFinder;
    import org.deepsymmetry.beatlink.Util;
    import org.deepsymmetry.beatlink.VirtualCdj;
    import org.deepsymmetry.beatlink.data.BeatGridFinder;
    import org.deepsymmetry.beatlink.data.CrateDigger;
    import org.deepsymmetry.beatlink.data.MetadataFinder;
    import org.deepsymmetry.beatlink.data.TimeFinder;
    import org.deepsymmetry.beatlink.data.WaveformDetail;
    import org.deepsymmetry.beatlink.data.WaveformDetailComponent;
    import org.deepsymmetry.beatlink.data.WaveformFinder;
    
    import java.awt.*;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.nio.ByteBuffer;
    import java.text.DecimalFormat;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Properties;
    import java.util.Set;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;
    
    import static org.bytedeco.ffmpeg.global.avutil.AV_PIX_FMT_RGB24;
    
    public class App {
        public static ArrayList tracks = new ArrayList<>();
        public static boolean dbRead = false;
        public static Properties props = new Properties();
        private static Map<Integer, FFmpegFrameRecorder> recorders = new HashMap<>();
        private static Map<Integer, Integer> frameCount = new HashMap<>();

        private static final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        private static final int FPS = 60;
        private static final int FRAME_INTERVAL_MS = 1000 / FPS;

        private static Map<Integer, ScheduledFuture<?>> schedules = new HashMap<>();

        private static Set<Integer> streamingPlayers = new HashSet<>();
    
        public static String byteArrayToMacString(byte[] macBytes) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < macBytes.length; i++) {
                sb.append(String.format("%02X%s", macBytes[i], (i < macBytes.length - 1) ? ":" : ""));
            }
            return sb.toString();
        }
    
        private static void updateWaveformForPlayer(int player) throws Exception {
            Integer frame_for_player = frameCount.get(player);
            if (frame_for_player == null) {
                frame_for_player = 0;
                frameCount.putIfAbsent(player, frame_for_player);
            }
    
            if (!WaveformFinder.getInstance().isRunning()) {
                WaveformFinder.getInstance().start();
            }
            WaveformDetail detail = WaveformFinder.getInstance().getLatestDetailFor(player);
    
            if (detail != null) {
                WaveformDetailComponent component = (WaveformDetailComponent) detail.createViewComponent(
                        MetadataFinder.getInstance().getLatestMetadataFor(player),
                        BeatGridFinder.getInstance().getLatestBeatGridFor(player)
                );
                component.setMonitoredPlayer(player);
                component.setPlaybackState(player, TimeFinder.getInstance().getTimeFor(player), true);
                component.setAutoScroll(true);
                int width = 1200;
                int height = 200;
                Dimension dimension = new Dimension(width, height);
                component.setPreferredSize(dimension);
                component.setSize(dimension);
                component.setScale(1);
                component.doLayout();
    
                // Create a fresh BufferedImage and clear it before rendering
                BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = image.createGraphics();
                g.clearRect(0, 0, width, height);  // Clear any old content
    
                // Draw waveform into the BufferedImage
                component.paint(g);
                g.dispose();
    
                int port = 5004 + player;
                String inputFile = port + "_" + frame_for_player + ".mp4";
                // Initialize the FFmpegFrameRecorder for YUV420P
                FFmpegFrameRecorder recorder_file = new FFmpegFrameRecorder(inputFile, width, height);
                FFmpegLogCallback.set();  // Enable FFmpeg logging for debugging
                recorder_file.setFormat("mp4");
                recorder_file.setVideoCodec(avcodec.AV_CODEC_ID_H264);
                recorder_file.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);  // Use YUV420P format directly
                recorder_file.setFrameRate(FPS);
    
                // Set video options
                recorder_file.setVideoOption("preset", "ultrafast");
                recorder_file.setVideoOption("tune", "zerolatency");
                recorder_file.setVideoOption("x264-params", "repeat-headers=1");
                recorder_file.setGopSize(FPS);
                try {
                    recorder_file.start();  // Ensure this is called before recording any frames
                    System.out.println("Recorder started successfully for player: " + player);
                } catch (org.bytedeco.javacv.FFmpegFrameRecorder.Exception e) {
                    e.printStackTrace();
                }
    
                // Get all pixels in one call
                int[] pixels = new int[width * height];
                image.getRGB(0, 0, width, height, pixels, 0, width);
                recorder_file.recordImage(width, height, Frame.DEPTH_UBYTE, 1, 3 * width, AV_PIX_FMT_RGB24, ByteBuffer.wrap(argbToByteArray(pixels, width, height)));
                recorder_file.stop();
                recorder_file.release();
                final FFmpegFrameRecorder recorder = recorders.get(player);
                FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(inputFile);
    
    
                try {
                    grabber.start();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                if (recorder == null) {
                    try {
                        String outputStream = "rtp://127.0.0.1:" + port;
                        FFmpegFrameRecorder initial_recorder = new FFmpegFrameRecorder(outputStream, grabber.getImageWidth(), grabber.getImageHeight());
                        initial_recorder.setFormat("rtp");
                        initial_recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
                        initial_recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
                        initial_recorder.setFrameRate(grabber.getFrameRate());
                        initial_recorder.setGopSize(FPS);
                        initial_recorder.setVideoOption("x264-params", "keyint=60");
                        initial_recorder.setVideoOption("rtsp_transport", "tcp");
                        initial_recorder.start();
                        recorders.putIfAbsent(player, initial_recorder);
                        frameCount.putIfAbsent(player, 0);
                        putToRTP(player, grabber, initial_recorder);
                    }
                    catch (Exception e) {
                        e.printStackTrace();
                    }
                }
                else {
                    putToRTP(player, grabber, recorder);
                }
                File file = new File(inputFile);
                if (file.exists() && file.delete()) {
                    System.out.println("Successfully deleted file: " + inputFile);
                } else {
                    System.out.println("Failed to delete file: " + inputFile);
                }
            }
        }
    
        public static void putToRTP(int player, FFmpegFrameGrabber grabber, FFmpegFrameRecorder recorder) throws FrameGrabber.Exception {
            final Frame frame = grabber.grabFrame();
            int frameCount_local = frameCount.get(player);
            // Force a keyframe once per second (every FPS frames) so receivers can resync.
            frame.keyFrame = frameCount_local++ % FPS == 0;
            frameCount.put(player, frameCount_local);
            try {
                recorder.record(frame);
            } catch (FFmpegFrameRecorder.Exception e) {
                throw new RuntimeException(e);
            }
        }
        public static byte[] argbToByteArray(int[] argb, int width, int height) {
            int totalPixels = width * height;
            byte[] byteArray = new byte[totalPixels * 3];  // 3 bytes per pixel (RGB; alpha is dropped)
    
            for (int i = 0; i < totalPixels; i++) {
                int argbPixel = argb[i];
    
                byteArray[i * 3] = (byte) ((argbPixel >> 16) & 0xFF);  // Red
                byteArray[i * 3 + 1] = (byte) ((argbPixel >> 8) & 0xFF);   // Green
                byteArray[i * 3 + 2] = (byte) (argbPixel & 0xFF);  // Blue
            }
    
            return byteArray;
        }
    
    
        public static void main(String[] args) throws Exception {
            VirtualCdj.getInstance().setDeviceNumber((byte) 4);
            CrateDigger.getInstance().addDatabaseListener(new DBService());
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "com.bugbytz.prolink.CustomSerializer");
            props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "20971520");
    
            VirtualCdj.getInstance().addUpdateListener(update -> {
                if (update instanceof CdjStatus) {
                    try (Producer<String, DeviceStatus> producer = new KafkaProducer<>(props)) {
                        DecimalFormat df_obj = new DecimalFormat("#.##");
                        DeviceStatus deviceStatus = new DeviceStatus(
                                update.getDeviceNumber(),
                                ((CdjStatus) update).isPlaying() || !((CdjStatus) update).isPaused(),
                                ((CdjStatus) update).getBeatNumber(),
                                update.getBeatWithinBar(),
                                Double.parseDouble(df_obj.format(update.getEffectiveTempo())),
                                Double.parseDouble(df_obj.format(Util.pitchToPercentage(update.getPitch()))),
                                update.getAddress().getHostAddress(),
                                byteArrayToMacString(DeviceFinder.getInstance().getLatestAnnouncementFrom(update.getDeviceNumber()).getHardwareAddress()),
                                ((CdjStatus) update).getRekordboxId(),
                                update.getDeviceName()
                        );
                        ProducerRecord<String, DeviceStatus> record = new ProducerRecord<>("device-status", "device-" + update.getDeviceNumber(), deviceStatus);
                        try {
                            producer.send(record).get();
                        } catch (InterruptedException ex) {
                            throw new RuntimeException(ex);
                        } catch (ExecutionException ex) {
                            throw new RuntimeException(ex);
                        }
                        producer.flush();
                        if (!WaveformFinder.getInstance().isRunning()) {
                            try {
                                WaveformFinder.getInstance().start();
                            } catch (Exception ex) {
                                throw new RuntimeException(ex);
                            }
                        }
                    }
                }
            });
            DeviceFinder.getInstance().addDeviceAnnouncementListener(new DeviceAnnouncementAdapter() {
                @Override
                public void deviceFound(DeviceAnnouncement announcement) {
                    if (!streamingPlayers.contains(announcement.getDeviceNumber())) {
                        streamingPlayers.add(announcement.getDeviceNumber());
                        schedules.putIfAbsent(announcement.getDeviceNumber(), scheduler.scheduleAtFixedRate(() -> {
                            try {
                                Runnable task = () -> {
                                    try {
                                        updateWaveformForPlayer(announcement.getDeviceNumber());
                                    } catch (InterruptedException e) {
                                        System.out.println("Thread interrupted");
                                    } catch (Exception e) {
                                        throw new RuntimeException(e);
                                    }
                                    System.out.println("Lambda thread work completed!");
                                };
                                task.run();
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                        }, 0, FRAME_INTERVAL_MS, TimeUnit.MILLISECONDS));
                    }
                }
    
                @Override
                public void deviceLost(DeviceAnnouncement announcement) {
                    if (streamingPlayers.contains(announcement.getDeviceNumber())) {
                        schedules.get(announcement.getDeviceNumber()).cancel(true);
                        streamingPlayers.remove(announcement.getDeviceNumber());
                    }
                }
            });
            BeatGridFinder.getInstance().start();
            MetadataFinder.getInstance().start();
            VirtualCdj.getInstance().start();
            TimeFinder.getInstance().start();
            DeviceFinder.getInstance().start();
            CrateDigger.getInstance().start();
    
            try {
                LoadCommandConsumer consumer = new LoadCommandConsumer("localhost:9092", "load-command-group");
                Thread consumerThread = new Thread(consumer::startConsuming);
                consumerThread.start();
    
                Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                    consumer.shutdown();
                    try {
                        consumerThread.join();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }));
                Thread.sleep(60000);
            } catch (InterruptedException e) {
                System.out.println("Interrupted, exiting.");
            }
        }
    }
    
  • Can ffmpeg extract closed caption data [closed]

    26 March, by spinon

    I am currently using ffmpeg to convert videos in various formats to flv files. A request has also come up to get the closed caption info out of the file as well. Does anyone have any experience with this, or know if it can even be done? I don't see any options for it, but thought I would ask and see.
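
    For what it's worth, ffmpeg can decode embedded EIA-608/708 captions through its lavfi movie source with the +subcc output pad. A minimal sketch, assuming an MPEG-TS input that actually carries embedded captions (the file names are placeholders):

    import subprocess

    # The extra [out0+subcc] pad exposes the EIA-608/708 caption data of
    # input.ts as a subtitle stream, which -map 0:s then writes out as SRT.
    subprocess.run(
        [
            "ffmpeg",
            "-f", "lavfi",
            "-i", "movie=input.ts[out0+subcc]",
            "-map", "0:s",
            "captions.srt",
        ],
        check=True,
    )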

  • Using FFmpeg with URL input causes SIGSEGV in AWS Lambda (Python runtime)

    26 March, by Dave94

    I'm trying to implement a video converting solution on AWS Lambda, following their article named Processing user-generated content using AWS Lambda and FFmpeg. However, when I run my command with subprocess.Popen() it returns -11, which translates to SIGSEGV (segmentation fault). I've tried to process the video with the newest (4.3.1) static build from John Van Sickle's site as well as with the "official" ffmpeg-lambda-layer, but it seems like it doesn't matter which one I use, the result is the same.

    If I download the video to the Lambda's /tmp directory and pass that local file as the input to FFmpeg, it works correctly (with the same parameters). However, I'm trying to avoid this, as the /tmp directory's maximum size is only 512 MB, which is not quite enough for me.

    The relevant code which returns SIGSEGV:

    ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + s3_source_signed_url + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p1.communicate()
    print(p1.returncode) #prints -11
    

    stderr of FFmpeg:

    ffmpeg version 4.1.3-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2019 the FFmpeg developers
      built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
      configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzvbi --enable-libzimg
      libavutil      56. 22.100 / 56. 22.100
      libavcodec     58. 35.100 / 58. 35.100
      libavformat    58. 20.100 / 58. 20.100
      libavdevice    58.  5.100 / 58.  5.100
      libavfilter     7. 40.101 /  7. 40.101
      libswscale      5.  3.100 /  5.  3.100
      libswresample   3.  3.100 /  3.  3.100
      libpostproc    55.  3.100 / 55.  3.100
    [tcp @ 0x728cc00] Starting connection attempt to 52.219.74.177 port 443
    [tcp @ 0x728cc00] Successfully connected to 52.219.74.177 port 443
    [h264 @ 0x729b780] Reinit context to 1280x720, pix_fmt: yuv420p
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'https://bucket.s3.amazonaws.com --> presigned url with 15 min expiration time':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41isomavc1
        creation_time   : 2015-09-02T07:42:42.000000Z
      Duration: 00:00:15.64, start: 0.000000, bitrate: 2640 kb/s
        Stream #0:0(und): Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt709, left), 1280x720 [SAR 1:1 DAR 16:9], 2475 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
        Metadata:
          creation_time   : 2015-09-02T07:42:42.000000Z
          handler_name    : L-SMASH Video Handler
          encoder         : AVC Coding
        Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default)
        Metadata:
          creation_time   : 2015-09-02T07:42:42.000000Z
          handler_name    : L-SMASH Audio Handler
    [mp3 @ 0x733f340] Skipping 0 bytes of junk at 1344.
    Input #1, mp3, from '/opt/bin/audio.mp3':
      Metadata:
        encoded_by      : Logic Pro X
        date            : 2021-01-03
        coding_history  : 
        time_reference  : 158760000
        umid            : 0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004500F9E4
        encoder         : Lavf58.49.100
      Duration: 00:04:01.21, start: 0.025057, bitrate: 320 kb/s
        Stream #1:0: Audio: mp3, 44100 Hz, stereo, fltp, 320 kb/s
        Metadata:
          encoder         : Lavc58.97
    Input #2, png_pipe, from '/opt/bin/watermark.png':
      Duration: N/A, bitrate: N/A
        Stream #2:0: Video: png, 1 reference frame, rgba(pc), 701x190 [SAR 1521:1521 DAR 701:190], 25 tbr, 25 tbn, 25 tbc
    [Parsed_scale_0 @ 0x7341140] w:1920 h:1080 flags:'bilinear' interl:0
    Stream mapping:
      Stream #0:0 (h264) -> scale
      Stream #2:0 (png) -> overlay:overlay
      format -> Stream #0:0 (libx264)
      Stream #1:0 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    [h264 @ 0x72d8600] Reinit context to 1280x720, pix_fmt: yuv420p
    [Parsed_scale_0 @ 0x733c1c0] w:1920 h:1080 flags:'bilinear' interl:0
    [graph 0 input from stream 0:0 @ 0x7669200] w:1280 h:720 pixfmt:yuv420p tb:1/25 fr:25/1 sar:1/1 sws_param:flags=2
    [graph 0 input from stream 2:0 @ 0x766a980] w:701 h:190 pixfmt:rgba tb:1/25 fr:25/1 sar:1521/1521 sws_param:flags=2
    [auto_scaler_0 @ 0x7670240] w:iw h:ih flags:'bilinear' interl:0
    [deinterlace_in_2_0 @ 0x766b680] auto-inserting filter 'auto_scaler_0' between the filter 'graph 0 input from stream 2:0' and the filter 'deinterlace_in_2_0'
    [Parsed_scale_0 @ 0x733c1c0] w:1280 h:720 fmt:yuv420p sar:1/1 -> w:1920 h:1080 fmt:yuv420p sar:1/1 flags:0x2
    [Parsed_pad_1 @ 0x733ce00] w:1920 h:1080 -> w:1920 h:1080 x:0 y:0 color:0x000000FF
    [Parsed_setsar_2 @ 0x733da00] w:1920 h:1080 sar:1/1 dar:16/9 -> sar:1/1 dar:16/9
    [auto_scaler_0 @ 0x7670240] w:701 h:190 fmt:rgba sar:1521/1521 -> w:701 h:190 fmt:yuva420p sar:1/1 flags:0x2
    [Parsed_overlay_3 @ 0x733e440] main w:1920 h:1080 fmt:yuv420p overlay w:701 h:190 fmt:yuva420p
    [Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Selected 1/50 time base
    [Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Sync level 2
    [libx264 @ 0x72c1c00] using SAR=1/1
    [libx264 @ 0x72c1c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 0x72c1c00] profile Progressive High, level 4.0, 4:2:0, 8-bit
    [libx264 @ 0x72c1c00] 264 - core 157 r2969 d4099dd - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=9 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=60 keyint_min=6 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=abr mbtree=1 bitrate=4500 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, flv, to 'pipe:':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41isomavc1
        encoder         : Lavf58.20.100
        Stream #0:0: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 4500 kb/s, 30 fps, 1k tbn, 30 tbc (default)
        Metadata:
          encoder         : Lavc58.35.100 libx264
        Side data:
          cpb: bitrate max/min/avg: 0/0/4500000 buffer size: 0 vbv_delay: -1
        Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 44100 Hz, stereo, fltp, 320 kb/s
        Metadata:
          encoder         : Lavc58.97
    frame=   27 fps=0.0 q=32.0 size=     247kB time=00:00:00.03 bitrate=59500.0kbits/s speed=0.0672x
    frame=   77 fps= 77 q=27.0 size=    1115kB time=00:00:02.03 bitrate=4478.0kbits/s speed=2.03x
    frame=  126 fps= 83 q=25.0 size=    2302kB time=00:00:04.00 bitrate=4712.4kbits/s speed=2.64x
    frame=  177 fps= 87 q=26.0 size=    3576kB time=00:00:06.03 bitrate=4854.4kbits/s speed=2.97x
    frame=  225 fps= 88 q=25.0 size=    4910kB time=00:00:07.96 bitrate=5047.8kbits/s speed=3.13x
    frame=  272 fps= 89 q=27.0 size=    6189kB time=00:00:09.84 bitrate=5147.9kbits/s speed=3.22x
    frame=  320 fps= 90 q=27.0 size=    7058kB time=00:00:11.78 bitrate=4907.5kbits/s speed=3.31x
    frame=  372 fps= 91 q=26.0 size=    8098kB time=00:00:13.84 bitrate=4791.0kbits/s speed=3.4x
    

    And that's the end of it. It should continue processing until 00:04:02, as that's my audio's length, but it stops at this point every time (approximately the length of my video).

    The relevant code which works correctly:

    ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + '/tmp/' + s3_source_key + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p1.communicate()
    print(p1.returncode) #prints 0
    

    With this code, it repeats the video as many times as needed to match the length of the audio.

    Both versions work correctly on my computer.

    This question is almost the same, but in my case FFmpeg is able to access the signed URL.
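
    One workaround worth testing (my suggestion, not something from the article): stream the object into ffmpeg's stdin with boto3, so nothing touches /tmp and ffmpeg never opens HTTPS itself. Two caveats: -stream_loop -1 has to go, because pipes are not seekable, and the MP4 needs its moov atom at the front (faststart) to be demuxable from a pipe. The bucket and key names below are placeholders:

    import shutil
    import subprocess

    import boto3

    s3 = boto3.client("s3")
    # Streaming body: bytes are pulled from S3 on demand, nothing lands in /tmp.
    body = s3.get_object(Bucket="my-bucket", Key="input.mp4")["Body"]

    # '-i -' makes ffmpeg read the video from the pipe; output is discarded
    # here ("-f null -") just to check that decoding runs to completion.
    proc = subprocess.Popen(
        ["/opt/bin/ffmpeg", "-loglevel", "error", "-i", "-",
         "-vcodec", "libx264", "-f", "null", "-"],
        stdin=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    try:
        shutil.copyfileobj(body, proc.stdin)  # pump the S3 bytes into ffmpeg
    except BrokenPipeError:
        pass  # ffmpeg exited early; its stderr will say why
    proc.stdin.close()
    _, stderr = proc.communicate()
    print(proc.returncode, stderr.decode(errors="replace"))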