Keyword: Tags / Rennes

Other articles (106)

  • Creating farms of unique websites

13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Authorizations overridden by plugins

27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Supporting all media types

13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (7956)

  • Python opencv ffmpeg threading exit functions

29 November 2020, by scacchi

    I'm trying to stop the audio/video recording loop by pressing a key (or via some other event). If I use a simple time.sleep() in the main loop, it works perfectly: after 5 seconds the video stops and the file is created. But if I use the keyboard_pressed() function instead, stop_AVrecording() and file_manager() do not execute correctly. What am I doing wrong? (A polling-loop sketch follows the code below.)
    Thanks in advance

    from __future__ import print_function, division
import numpy as np
import cv2
import pyaudio
import wave
import threading
import time
import subprocess
import os
import keyboard

class VideoRecorder:
    "Video class based on openCV"
    def __init__(self, name="temp_video.avi", fourcc="MJPG", sizex=640, sizey=480, fps=30):
        self.open = True
        self.fps = fps                  # fps should be the minimum constant rate at which the camera can
        self.fourcc = fourcc            # capture images (with no decrease in speed over time; testing is required)
        self.frameSize = (sizex, sizey) # video formats and sizes also depend and vary according to the camera used
        self.video_filename = name
        self.video_cap = cv2.VideoCapture(1, cv2.CAP_DSHOW)
        self.video_writer = cv2.VideoWriter_fourcc(*self.fourcc)
        self.video_out = cv2.VideoWriter(self.video_filename, self.video_writer, self.fps, self.frameSize)
        self.frame_counts = 1
        self.start_time = time.time()

    def record(self):
        "Video starts being recorded"
        counter = 1
        timer_start = time.time()
        timer_current = 0
        while self.open:
            ret, video_frame = self.video_cap.read()
            if ret:
                self.video_out.write(video_frame)
                # print(str(counter) + " " + str(self.frame_counts) + " frames written " + str(timer_current))
                self.frame_counts += 1
                counter += 1
                timer_current = time.time() - timer_start
                #time.sleep(1/self.fps)
                # gray = cv2.cvtColor(video_frame, cv2.COLOR_BGR2GRAY)
                cv2.imshow('video_frame', video_frame)
                cv2.waitKey(1)
            else:
                break

    def stop(self):
        "Finishes the video recording therefore the thread too"
        if self.open:
            self.open=False
            self.video_out.release()
            self.video_cap.release()
            cv2.destroyAllWindows()

    def start(self):
        "Launches the video recording function using a thread"
        video_thread = threading.Thread(target=self.record)
        video_thread.start()

class AudioRecorder():
    "Audio class based on pyAudio and Wave"
    def __init__(self, filename="temp_audio.wav", rate=44100, fpb=1024, channels=2):
        self.open = True
        self.rate = rate
        self.frames_per_buffer = fpb
        self.channels = channels
        self.format = pyaudio.paInt16
        self.audio_filename = filename
        self.audio = pyaudio.PyAudio()
        self.stream = self.audio.open(format=self.format,
                                      channels=self.channels,
                                      rate=self.rate,
                                      input=True,
                                      frames_per_buffer = self.frames_per_buffer)
        self.audio_frames = []

    def record(self):
        "Audio starts being recorded"
        self.stream.start_stream()
        while self.open:
            data = self.stream.read(self.frames_per_buffer)
            self.audio_frames.append(data)
            if not self.open:
                break

    def stop(self):
        "Finishes the audio recording therefore the thread too"
        if self.open:
            self.open = False
            self.stream.stop_stream()
            self.stream.close()
            self.audio.terminate()
            waveFile = wave.open(self.audio_filename, 'wb')
            waveFile.setnchannels(self.channels)
            waveFile.setsampwidth(self.audio.get_sample_size(self.format))
            waveFile.setframerate(self.rate)
            waveFile.writeframes(b''.join(self.audio_frames))
            waveFile.close()

    def start(self):
        "Launches the audio recording function using a thread"
        audio_thread = threading.Thread(target=self.record)
        audio_thread.start()

def start_AVrecording(filename="test"):
    global video_thread
    global audio_thread
    video_thread = VideoRecorder()
    audio_thread = AudioRecorder()
    audio_thread.start()
    video_thread.start()
    return filename


def start_video_recording(filename="test"):
    global video_thread
    video_thread = VideoRecorder()
    video_thread.start()
    return filename

def start_audio_recording(filename="test"):
    global audio_thread
    audio_thread = AudioRecorder()
    audio_thread.start()
    return filename

def stop_AVrecording(filename="test"):
    audio_thread.stop()
    frame_counts = video_thread.frame_counts
    elapsed_time = time.time() - video_thread.start_time
    recorded_fps = frame_counts / elapsed_time
    print("total frames " + str(frame_counts))
    print("elapsed time " + str(elapsed_time))
    print("recorded fps " + str(recorded_fps))
    video_thread.stop()

    # Makes sure the threads have finished
    while threading.active_count() > 1:
        time.sleep(1)

    # Merging audio and video signal
    if abs(recorded_fps - 6) >= 0.01:    # if the recorded fps deviates from the expected rate, re-encode to the expected fps
        print("Re-encoding")
        cmd = "ffmpeg -r " + str(recorded_fps) + " -i temp_video.avi -pix_fmt yuv420p -r 6 temp_video2.avi"
        subprocess.call(cmd, shell=True)
        print("Muxing")
        cmd = "ffmpeg -y -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video2.avi -pix_fmt yuv420p " + filename + ".avi"
        subprocess.call(cmd, shell=True)
    else:
        print("Normal recording\nMuxing")
        cmd = "ffmpeg -y -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video.avi -pix_fmt yuv420p " + filename + ".avi"
        subprocess.call(cmd, shell=True)
        print("..")

def file_manager(filename="test"):
    "Required and wanted processing of final files"
    local_path = os.getcwd()
    if os.path.exists(str(local_path) + "/temp_audio.wav"):
        os.remove(str(local_path) + "/temp_audio.wav")
    if os.path.exists(str(local_path) + "/temp_video.avi"):
        os.remove(str(local_path) + "/temp_video.avi")
    if os.path.exists(str(local_path) + "/temp_video2.avi"):
        os.remove(str(local_path) + "/temp_video2.avi")
    # if os.path.exists(str(local_path) + "/" + filename + ".avi"):
    #     os.remove(str(local_path) + "/" + filename + ".avi")

def keyboard_pressed():
    while True:
        if keyboard.is_pressed('q'):  # if key 'q' is pressed
            print('------------You Pressed Q Key!--------------')
            #time.sleep(5)
            break


if __name__ == '__main__':
    start_AVrecording()
    #time.sleep(5)
    keyboard_pressed()
    print('-------------Time Sleep-------------------')
    time.sleep(5)
    print('-------------Stop AVrecording-------------')
    stop_AVrecording()
    print('-------------File Manager-----------------')
    file_manager()
    print('-----------------End----------------------')
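
    One possible adjustment, offered as a sketch rather than a confirmed fix: the busy-wait loop in keyboard_pressed() never sleeps, so it can starve the recorder threads of CPU time. Polling with a short sleep keeps the key check responsive while letting the threads run (this assumes the same third-party keyboard module imported above):

import time
import keyboard

def keyboard_pressed(poll_interval=0.1):
    "Polls for the 'q' key without monopolizing the CPU"
    while not keyboard.is_pressed('q'):
        time.sleep(poll_interval)  # yield so the recorder threads keep running
    print('------------You Pressed Q Key!--------------')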


    



    


  • Encoding of raw frames (D3D11Texture2D) to an rtsp stream using libav*

16 July 2021, by uzer

    I have managed to create an rtsp stream using libav* and a DirectX texture (which I am obtaining from the GDI API using the BitBlt method). Here's my approach for creating a live rtsp stream:

    


      

    1. Create output context and stream (skipping the checks here)

      


        

      avformat_alloc_output_context2(&ofmt_ctx, NULL, "rtsp", rtsp_url); //RTSP
      vid_codec = avcodec_find_encoder(ofmt_ctx->oformat->video_codec);
      vid_stream = avformat_new_stream(ofmt_ctx, vid_codec);
      vid_codec_ctx = avcodec_alloc_context3(vid_codec);


      


    2. Set codec params

      


      codec_ctx->codec_tag = 0;
      codec_ctx->codec_id = ofmt_ctx->oformat->video_codec;
      //codec_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
      codec_ctx->width = width;
      codec_ctx->height = height;
      codec_ctx->gop_size = 12;
      //codec_ctx->gop_size = 40;
      //codec_ctx->max_b_frames = 3;
      codec_ctx->pix_fmt = target_pix_fmt; // AV_PIX_FMT_YUV420P
      codec_ctx->framerate = { stream_fps, 1 };
      codec_ctx->time_base = { 1, stream_fps };
      if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
      {
          codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
      }


      


    3. Initialize video stream

      


      if (avcodec_parameters_from_context(stream->codecpar, codec_ctx) < 0)
      {
          Debug::Error("Could not initialize stream codec parameters!");
          return false;
      }

      AVDictionary* codec_options = nullptr;
      if (codec->id == AV_CODEC_ID_H264) {
          av_dict_set(&codec_options, "profile", "high", 0);
          av_dict_set(&codec_options, "preset", "fast", 0);
          av_dict_set(&codec_options, "tune", "zerolatency", 0);
      }
      // open video encoder
      int ret = avcodec_open2(codec_ctx, codec, &codec_options);
      if (ret < 0) {
          Debug::Error("Could not open video encoder: ", avcodec_get_name(codec->id), " error ret: ", AVERROR(ret));
          return false;
      }

      stream->codecpar->extradata = codec_ctx->extradata;
      stream->codecpar->extradata_size = codec_ctx->extradata_size;


      


    4. Start streaming

      


      // Create new frame and allocate buffer
      AVFrame* AllocateFrameBuffer(AVCodecContext* codec_ctx, double width, double height)
      {
          AVFrame* frame = av_frame_alloc();
          std::vector<uint8_t> framebuf(av_image_get_buffer_size(codec_ctx->pix_fmt, width, height, 1));
          av_image_fill_arrays(frame->data, frame->linesize, framebuf.data(), codec_ctx->pix_fmt, width, height, 1);
          frame->width = width;
          frame->height = height;
          frame->format = static_cast<int>(codec_ctx->pix_fmt);
          //Debug::Log("framebuf size: ", framebuf.size(), "  frame format: ", frame->format);
          return frame;
      }

      void RtspStream(AVFormatContext* ofmt_ctx, AVStream* vid_stream, AVCodecContext* vid_codec_ctx, char* rtsp_url)
      {
          printf("Output stream info:\n");
          av_dump_format(ofmt_ctx, 0, rtsp_url, 1);

          const int width = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetTextureWidth();
          const int height = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetTextureHeight();

          //DirectX BGRA to h264 YUV420p
          SwsContext* conversion_ctx = sws_getContext(width, height, src_pix_fmt,
              vid_stream->codecpar->width, vid_stream->codecpar->height, target_pix_fmt,
              SWS_BICUBIC | SWS_BITEXACT, nullptr, nullptr, nullptr);
          if (!conversion_ctx)
          {
              Debug::Error("Could not initialize sample scaler!");
              return;
          }

          AVFrame* frame = AllocateFrameBuffer(vid_codec_ctx, vid_codec_ctx->width, vid_codec_ctx->height);
          if (!frame) {
              Debug::Error("Could not allocate video frame\n");
              return;
          }

          if (avformat_write_header(ofmt_ctx, NULL) < 0) {
              Debug::Error("Error occurred when writing header");
              return;
          }
          if (av_frame_get_buffer(frame, 0) < 0) {
              Debug::Error("Could not allocate the video frame data\n");
              return;
          }

          int frame_cnt = 0;
          //av start time in microseconds
          int64_t start_time_av = av_gettime();
          AVRational time_base = vid_stream->time_base;
          AVRational time_base_q = { 1, AV_TIME_BASE };

          // frame pixel data info
          int data_size = width * height * 4;
          uint8_t* data = new uint8_t[data_size];
          //    AVPacket* pkt = av_packet_alloc();

          while (RtspStreaming::IsStreaming())
          {
              /* make sure the frame data is writable */
              if (av_frame_make_writable(frame) < 0)
              {
                  Debug::Error("Can't make frame writable");
                  break;
              }

              //get copy/ref of the texture
              //uint8_t* data = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetBuffer();
              if (!WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetPixels(data, 0, 0, width, height))
              {
                  Debug::Error("Failed to get frame buffer. ID: ", RtspStreaming::WindowId());
                  std::this_thread::sleep_for(std::chrono::seconds(2));
                  continue;
              }
              //printf("got pixels data\n");
              // convert BGRA to yuv420 pixel format
              int srcStrides[1] = { 4 * width };
              if (sws_scale(conversion_ctx, &data, srcStrides, 0, height, frame->data, frame->linesize) < 0)
              {
                  Debug::Error("Unable to scale d3d11 texture to frame. ", frame_cnt);
                  break;
              }
              //Debug::Log("frame pts: ", frame->pts, "  time_base:", av_rescale_q(1, vid_codec_ctx->time_base, vid_stream->time_base));
              frame->pts = frame_cnt++;
              //frame_cnt++;
              //printf("scale conversion done\n");

              //encode to the video stream
              int ret = avcodec_send_frame(vid_codec_ctx, frame);
              if (ret < 0)
              {
                  Debug::Error("Error sending frame to codec context! ", frame_cnt);
                  break;
              }

              AVPacket* pkt = av_packet_alloc();
              //av_init_packet(pkt);
              ret = avcodec_receive_packet(vid_codec_ctx, pkt);
              if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
              {
                  //av_packet_unref(pkt);
                  av_packet_free(&pkt);
                  continue;
              }
              else if (ret < 0)
              {
                  Debug::Error("Error during receiving packet: ", AVERROR(ret));
                  //av_packet_unref(pkt);
                  av_packet_free(&pkt);
                  break;
              }

              if (pkt->pts == AV_NOPTS_VALUE)
              {
                  //Write PTS
                  //Duration between 2 frames (us)
                  int64_t calc_duration = (double)AV_TIME_BASE / av_q2d(vid_stream->r_frame_rate);
                  //Parameters
                  pkt->pts = (double)(frame_cnt * calc_duration) / (double)(av_q2d(time_base) * AV_TIME_BASE);
                  pkt->dts = pkt->pts;
                  pkt->duration = (double)calc_duration / (double)(av_q2d(time_base) * AV_TIME_BASE);
              }
              int64_t pts_time = av_rescale_q(pkt->dts, time_base, time_base_q);
              int64_t now_time = av_gettime() - start_time_av;

              if (pts_time > now_time)
                  av_usleep(pts_time - now_time);

              //pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
              //pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
              //pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
              //pkt->pos = -1;

              //write frame and send
              if (av_interleaved_write_frame(ofmt_ctx, pkt) < 0)
              {
                  Debug::Error("Error muxing packet, frame number:", frame_cnt);
                  break;
              }

              //Debug::Log("RTSP streaming...");
              //std::this_thread::sleep_for(std::chrono::milliseconds(1000/20));
              //av_packet_unref(pkt);
              av_packet_free(&pkt);
          }

          //av_free_packet(pkt);
          delete[] data;

          /* Write the trailer, if any. The trailer must be written before you
           * close the CodecContexts open when you wrote the header; otherwise
           * av_write_trailer() may try to use memory that was freed on
           * av_codec_close(). */
          av_write_trailer(ofmt_ctx);
          av_frame_unref(frame);
          av_frame_free(&frame);
          printf("streaming thread CLOSED!\n");
      }


    Now, this allows me to connect to my rtsp server and maintain the connection. However, on the rtsp client side I am getting either a gray frame or a single static frame, as shown below:


    [image: static frame on client side]


    I would appreciate it if you could help with the following questions:

    1. Why is the stream not working, in spite of the continued connection to the server and the frames being updated?
    2. Video codec: by default the rtsp format uses the Mpeg4 codec; is it possible to use h264? When I manually set it to AV_CODEC_ID_H264, the program fails at avcodec_open2 with a return value of -22.
    3. Do I need to create and allocate a new "AVFrame" and "AVPacket" for every frame? Or can I just reuse a global variable for this?
    4. Do I need to explicitly add some code for real-time streaming? (Like the "-re" flag in ffmpeg.)

    It would be great if you could point out some example code for creating a livestream. I have checked the following resources:


    Update


    While testing I found that I am able to play the stream using ffplay, while it gets stuck in the VLC player. Here is a snapshot of the ffplay log:


    [image: ffplay log]


  • Using FFmpeg with URL input causes SIGSEGV in AWS Lambda (Python runtime)

26 March, by Dave94

    I'm trying to implement a video converting solution on AWS Lambda following their article named Processing user-generated content using AWS Lambda and FFmpeg. However, when I run my command with subprocess.Popen() it returns -11, which translates to SIGSEGV (segmentation fault). I've tried to process the video with the newest (4.3.1) static build from John Van Sickle's site as well as with the "official" ffmpeg-lambda-layer, but it seems like it doesn't matter which one I use: the result is the same.


    If I download the video to the Lambda's /tmp directory and add this downloaded file as an input to FFmpeg, it works correctly (with the same parameters). However, I'm trying to avoid this, as the /tmp directory's maximum size is only 512 MB, which is not quite enough for me.
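
    One way to sidestep the 512 MB /tmp limit, sketched under assumptions rather than taken from the article: stream the S3 object into FFmpeg's stdin instead of passing a URL. The bucket and key names are hypothetical, boto3 is assumed to be available in the runtime, options that need a seekable input (such as -stream_loop -1) will not work over a pipe, and for brevity the transcoded output is collected in memory:

    import boto3
    import subprocess
    import threading

    def transcode_from_s3(bucket, key):
        "Pipes an S3 object through FFmpeg without storing the input in /tmp"
        body = boto3.client('s3').get_object(Bucket=bucket, Key=key)['Body']
        cmd = ['/opt/bin/ffmpeg', '-y', '-i', 'pipe:0',
               '-vcodec', 'libx264', '-pix_fmt', 'yuv420p', '-f', 'flv', 'pipe:1']
        p = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

        def feed():
            # stream the object body into FFmpeg chunk by chunk
            try:
                for chunk in body.iter_chunks(chunk_size=1024 * 1024):
                    p.stdin.write(chunk)
            except BrokenPipeError:
                pass  # FFmpeg exited first (e.g. because of -shortest)
            finally:
                p.stdin.close()

        feeder = threading.Thread(target=feed)
        feeder.start()
        output = p.stdout.read()  # drain stdout while the feeder writes stdin
        feeder.join()
        p.wait()
        return p.returncode, output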


    The relevant code which returns SIGSEGV:


    ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + s3_source_signed_url + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p1.communicate()
    print(p1.returncode)  # prints -11


    stderr of FFmpeg:


    ffmpeg version 4.1.3-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2019 the FFmpeg developers
      built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
      configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzvbi --enable-libzimg
      libavutil      56. 22.100 / 56. 22.100
      libavcodec     58. 35.100 / 58. 35.100
      libavformat    58. 20.100 / 58. 20.100
      libavdevice    58.  5.100 / 58.  5.100
      libavfilter     7. 40.101 /  7. 40.101
      libswscale      5.  3.100 /  5.  3.100
      libswresample   3.  3.100 /  3.  3.100
      libpostproc    55.  3.100 / 55.  3.100
    [tcp @ 0x728cc00] Starting connection attempt to 52.219.74.177 port 443
    [tcp @ 0x728cc00] Successfully connected to 52.219.74.177 port 443
    [h264 @ 0x729b780] Reinit context to 1280x720, pix_fmt: yuv420p
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'https://bucket.s3.amazonaws.com --> presigned url with 15 min expiration time':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41isomavc1
        creation_time   : 2015-09-02T07:42:42.000000Z
      Duration: 00:00:15.64, start: 0.000000, bitrate: 2640 kb/s
        Stream #0:0(und): Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt709, left), 1280x720 [SAR 1:1 DAR 16:9], 2475 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
        Metadata:
          creation_time   : 2015-09-02T07:42:42.000000Z
          handler_name    : L-SMASH Video Handler
          encoder         : AVC Coding
        Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default)
        Metadata:
          creation_time   : 2015-09-02T07:42:42.000000Z
          handler_name    : L-SMASH Audio Handler
    [mp3 @ 0x733f340] Skipping 0 bytes of junk at 1344.
    Input #1, mp3, from '/opt/bin/audio.mp3':
      Metadata:
        encoded_by      : Logic Pro X
        date            : 2021-01-03
        coding_history  : 
        time_reference  : 158760000
        umid            : 0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004500F9E4
        encoder         : Lavf58.49.100
      Duration: 00:04:01.21, start: 0.025057, bitrate: 320 kb/s
        Stream #1:0: Audio: mp3, 44100 Hz, stereo, fltp, 320 kb/s
        Metadata:
          encoder         : Lavc58.97
    Input #2, png_pipe, from '/opt/bin/watermark.png':
      Duration: N/A, bitrate: N/A
        Stream #2:0: Video: png, 1 reference frame, rgba(pc), 701x190 [SAR 1521:1521 DAR 701:190], 25 tbr, 25 tbn, 25 tbc
    [Parsed_scale_0 @ 0x7341140] w:1920 h:1080 flags:'bilinear' interl:0
    Stream mapping:
      Stream #0:0 (h264) -> scale
      Stream #2:0 (png) -> overlay:overlay
      format -> Stream #0:0 (libx264)
      Stream #1:0 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    [h264 @ 0x72d8600] Reinit context to 1280x720, pix_fmt: yuv420p
    [Parsed_scale_0 @ 0x733c1c0] w:1920 h:1080 flags:'bilinear' interl:0
    [graph 0 input from stream 0:0 @ 0x7669200] w:1280 h:720 pixfmt:yuv420p tb:1/25 fr:25/1 sar:1/1 sws_param:flags=2
    [graph 0 input from stream 2:0 @ 0x766a980] w:701 h:190 pixfmt:rgba tb:1/25 fr:25/1 sar:1521/1521 sws_param:flags=2
    [auto_scaler_0 @ 0x7670240] w:iw h:ih flags:'bilinear' interl:0
    [deinterlace_in_2_0 @ 0x766b680] auto-inserting filter 'auto_scaler_0' between the filter 'graph 0 input from stream 2:0' and the filter 'deinterlace_in_2_0'
    [Parsed_scale_0 @ 0x733c1c0] w:1280 h:720 fmt:yuv420p sar:1/1 -> w:1920 h:1080 fmt:yuv420p sar:1/1 flags:0x2
    [Parsed_pad_1 @ 0x733ce00] w:1920 h:1080 -> w:1920 h:1080 x:0 y:0 color:0x000000FF
    [Parsed_setsar_2 @ 0x733da00] w:1920 h:1080 sar:1/1 dar:16/9 -> sar:1/1 dar:16/9
    [auto_scaler_0 @ 0x7670240] w:701 h:190 fmt:rgba sar:1521/1521 -> w:701 h:190 fmt:yuva420p sar:1/1 flags:0x2
    [Parsed_overlay_3 @ 0x733e440] main w:1920 h:1080 fmt:yuv420p overlay w:701 h:190 fmt:yuva420p
    [Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Selected 1/50 time base
    [Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Sync level 2
    [libx264 @ 0x72c1c00] using SAR=1/1
    [libx264 @ 0x72c1c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 0x72c1c00] profile Progressive High, level 4.0, 4:2:0, 8-bit
    [libx264 @ 0x72c1c00] 264 - core 157 r2969 d4099dd - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=9 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=60 keyint_min=6 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=abr mbtree=1 bitrate=4500 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, flv, to 'pipe:':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41isomavc1
        encoder         : Lavf58.20.100
        Stream #0:0: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 4500 kb/s, 30 fps, 1k tbn, 30 tbc (default)
        Metadata:
          encoder         : Lavc58.35.100 libx264
        Side data:
          cpb: bitrate max/min/avg: 0/0/4500000 buffer size: 0 vbv_delay: -1
        Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 44100 Hz, stereo, fltp, 320 kb/s
        Metadata:
          encoder         : Lavc58.97
    frame=   27 fps=0.0 q=32.0 size=     247kB time=00:00:00.03 bitrate=59500.0kbits/s speed=0.0672x
    frame=   77 fps= 77 q=27.0 size=    1115kB time=00:00:02.03 bitrate=4478.0kbits/s speed=2.03x
    frame=  126 fps= 83 q=25.0 size=    2302kB time=00:00:04.00 bitrate=4712.4kbits/s speed=2.64x
    frame=  177 fps= 87 q=26.0 size=    3576kB time=00:00:06.03 bitrate=4854.4kbits/s speed=2.97x
    frame=  225 fps= 88 q=25.0 size=    4910kB time=00:00:07.96 bitrate=5047.8kbits/s speed=3.13x
    frame=  272 fps= 89 q=27.0 size=    6189kB time=00:00:09.84 bitrate=5147.9kbits/s speed=3.22x
    frame=  320 fps= 90 q=27.0 size=    7058kB time=00:00:11.78 bitrate=4907.5kbits/s speed=3.31x
    frame=  372 fps= 91 q=26.0 size=    8098kB time=00:00:13.84 bitrate=4791.0kbits/s speed=3.4x


    And that's the end of it. It should continue processing until 00:04:02, since that's my audio's length, but it stops here every time (this is approximately the length of my video).


    The relevant code which works correctly:


    ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + '/tmp/' + s3_source_key + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p1.communicate()
    print(p1.returncode)  # prints 0


    With this code it repeats the video as many times as needed to match the length of the audio.


    Both versions work correctly on my computer.


    This question is almost the same as mine, but in my case FFmpeg is able to access the signed URL.
