Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (45)

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and it is announced here.
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To obtain a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further modifications are also required (...)

On other sites (6301)

  • Can I convert a Django video upload from a form using ffmpeg before storing the video?

    5 May 2014, by GetItDone

    I've been stuck for weeks trying to use ffmpeg to convert user-uploaded videos to FLV. I use Heroku to host my website and store my static and media files on Amazon S3 with s3boto. The initial video file uploads fine; however, when I retrieve the video and run a Celery task (in the same view where the initial video file is uploaded), the new file won't store on S3. I've been trying to get this to work for over a month with no luck, and there are really no good resources available for learning how to do this, so I figure that if I can get the ffmpeg task to run before storing the video, I may be able to get it to work. Unfortunately I'm still not very advanced at Python (or Django), so I don't even know if/how this is possible. Anyone have any ideas?

    I am willing to use any solution at this point, no matter how ugly, as long as it successfully takes video uploads and converts them to FLV using ffmpeg, with the resulting file being stored on S3. It doesn't seem that my situation is very common, because no matter where I look, I cannot find a solution that explains what I should be trying to do, so I will be very appreciative of any guidance. Thanks. My relevant code follows:

    #models.py
    def content_file_name(instance, filename):
       ext = filename.split('.')[-1]
       new_file_name = "remove%s.%s" % (uuid.uuid4(), ext)
       return '/'.join(['videos', instance.teacher.username, new_file_name])

    class BroadcastUpload(models.Model):
       title = models.CharField(max_length=50, verbose_name=_('Title'))
       description = models.TextField(max_length=100, verbose_name=_('Description'))
       teacher = models.ForeignKey(User, null=True, blank=True, related_name='teacher')
       created_date = models.DateTimeField(auto_now_add=True)
       video_upload = models.FileField(upload_to=content_file_name)
       flvfilename = models.CharField(max_length=100, null=True, blank=True)
       videothumbnail = models.CharField(max_length=100, null=True, blank=True)

    #tasks.py
    @task(name='celeryfiles.tasks.convert_flv')
    def convert_flv(video_id):
       video = BroadcastUpload.objects.get(pk=video_id)
       print "ID: %s" % video.id
       id = video.id
       print "VIDEO NAME: %s" % video.video_upload.name
       teacher = video.teacher
       print "TEACHER: %s" % teacher
       filename = video.video_upload
       sourcefile = "%s%s" % (settings.MEDIA_URL, filename)
       vidfilename = "%s_%s.flv" % (teacher, video.id)
       targetfile = "%svideos/flv/%s" % (settings.MEDIA_URL, vidfilename)
       ffmpeg = "ffmpeg -i %s %s" % (sourcefile, vidfilename)
       try:
           ffmpegresult = subprocess.call(ffmpeg)
           #also tried separately with following line:
           #ffmpegresult = commands.getoutput(ffmpeg)
           print "---------------FFMPEG---------------"
           print "FFMPEGRESULT: %s" % ffmpegresult
       except Exception as e:
           ffmpegresult = None
           print("Failed to convert video file %s to %s" % (sourcefile, targetfile))
           print(traceback.format_exc())
       video.flvfilename = vidfilename
       video.save()

    @task(name='celeryfiles.tasks.ffmpeg_image')        
    def ffmpeg_image(video_id):
       video = BroadcastUpload.objects.get(pk=video_id)
       print "ID: %s" %video.id
       id = video.id
       print "VIDEO NAME: %s" % video.video_upload.name
       teacher = video.teacher
       print "TEACHER: %s" % teacher
       filename = video.video_upload
       sourcefile = "%s%s" % (settings.MEDIA_URL, filename)
       imagefilename = "%s_%s.png" % (teacher, video.id)
       thumbnailfilename = "%svideos/flv/%s" % (settings.MEDIA_URL, thumbnailfilename)
       grabimage = "ffmpeg -y -i %s -vframes 1 -ss 00:00:02 -an -vcodec png -f rawvideo -s 320x240 %s" % (sourcefile, thumbnailfilename)
       try:        
            videothumbnail = subprocess.call(grabimage)
            #also tried separately following line:
            #videothumbnail = commands.getoutput(grabimage)
            print "---------------IMAGE---------------"
            print "VIDEOTHUMBNAIL: %s" % videothumbnail
       except Exception as e:
            videothumbnail = None
            print("Failed to convert video file %s to %s" % (sourcefile, thumbnailfilename))
            print(traceback.format_exc())
       video.videothumbnail = imagefilename
       video.save()

    #views.py
    def upload_broadcast(request):
       if request.method == 'POST':
           form = BroadcastUploadForm(request.POST, request.FILES)
           if form.is_valid():
               upload=form.save()
               video_id = upload.id
               image_grab = ffmpeg_image.delay(video_id)
               video_conversion = convert_flv.delay(video_id)
               return HttpResponseRedirect('/current_classes/')
       else:
           form = BroadcastUploadForm(initial={'teacher': request.user,})
       return render_to_response('videos/create_video.html', {'form': form,}, context_instance=RequestContext(request))

    #settings.py
    DEFAULT_FILE_STORAGE = 'myapp.s3utils.MediaRootS3BotoStorage'
    DEFAULT_S3_PATH = "media"
    STATICFILES_STORAGE = 'myapp.s3utils.StaticRootS3BotoStorage'
    STATIC_S3_PATH = "static"
    AWS_STORAGE_BUCKET_NAME = 'my_bucket'
    CLOUDFRONT_DOMAIN = 'domain.cloudfront.net'
    AWS_ACCESS_KEY_ID = 'MY_KEY_ID'
    AWS_SECRET_ACCESS_KEY = 'MY_SECRET_KEY'
    MEDIA_ROOT = '/%s/' % DEFAULT_S3_PATH
    MEDIA_URL = 'http://%s/%s/' % (CLOUDFRONT_DOMAIN, DEFAULT_S3_PATH)
    ...

    #s3utils.py
    from storages.backends.s3boto import S3BotoStorage
    from django.utils.functional import SimpleLazyObject

    StaticRootS3BotoStorage = lambda: S3BotoStorage(location='static')
    MediaRootS3BotoStorage  = lambda: S3BotoStorage(location='media')

    I can add any other info if needed to help me solve my problem.
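
    One possible direction, sketched below as an assumption rather than a tested fix: run ffmpeg on a local temporary copy of the upload before anything is sent to S3, then hand the converted bytes back to the S3-backed storage. The video_upload field and BroadcastUploadForm come from the question above; the helper name, temporary paths and the final save call are illustrative only, and ffmpeg is assumed to be on the PATH.

    # Hypothetical helper: convert an uploaded video to FLV locally, before it
    # ever reaches S3. Only the model/form names come from the question; the
    # rest is an illustrative sketch.
    import os
    import subprocess
    import tempfile

    from django.core.files.base import ContentFile


    def convert_upload_to_flv(uploaded_file):
        """Return (ContentFile with FLV data, error message) for a Django upload."""
        suffix = os.path.splitext(uploaded_file.name)[1]
        with tempfile.NamedTemporaryFile(suffix=suffix, delete=False) as src:
            for chunk in uploaded_file.chunks():  # works for any UploadedFile
                src.write(chunk)
            src_path = src.name
        dst_path = src_path + '.flv'
        try:
            # Pass the command as a list so no shell quoting is involved.
            returncode = subprocess.call(['ffmpeg', '-y', '-i', src_path, dst_path])
            if returncode != 0 or not os.path.exists(dst_path):
                return None, 'ffmpeg exited with code %s' % returncode
            with open(dst_path, 'rb') as flv:
                return ContentFile(flv.read()), None
        finally:
            for path in (src_path, dst_path):
                if os.path.exists(path):
                    os.remove(path)

    In the view, after form.is_valid(), one could call this helper on request.FILES['video_upload'] and then save the returned ContentFile onto the model field (for example upload.video_upload.save('converted.flv', flv_file)), so that only the converted file is pushed to S3; whether this coexists cleanly with the Celery tasks above is an assumption.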

  • Record video stream in Rust

    19 November 2024, by El_Loco

    I have bought a stereo camera with global shutter and a frame rate of at most 120 fps. https://www.amazon.com/dp/B0D8T3ZSL4?ref_=pe_386300_442618370_TE_sc_as_ri_0#

    


    My next step is to write a program that can show and record a video at the desired fps and resolution.

    


    use opencv::{
        core, highgui,
        prelude::*,
        videoio::{self, VideoCapture},
        Result,
    };

    fn open_camera() -> Result<VideoCapture> {
        let capture = videoio::VideoCapture::new(2, videoio::CAP_ANY)?;
        return Ok(capture);
    }

    fn main() -> Result<()> {
        let window = "video capture";
        highgui::named_window(window, highgui::WINDOW_AUTOSIZE)?;
        let mut cam = open_camera()?;
        let opened = videoio::VideoCapture::is_opened(&cam)?;
        if !opened {
            panic!("Unable to open default camera!");
        }
        let width = 3200.0;
        let height = 1200.0;
        cam.set(videoio::CAP_PROP_FRAME_WIDTH, width)?;
        cam.set(videoio::CAP_PROP_FRAME_HEIGHT, height)?;

        // Set the frame rate (FPS)
        let fps = 60.0;

        let fourcc = videoio::VideoWriter::fourcc('M', 'J', 'P', 'G')?;
        let mut writer = videoio::VideoWriter::new(
            "video_output.avi",
            fourcc,
            fps,
            core::Size::new(width as i32, height as i32),
            true,
        )?;

        if !writer.is_opened()? {
            println!("Error: Could not open the video writer.");
        }

        let mut frame = core::Mat::default();
        let mut ctr = 0;
        while cam.read(&mut frame)? {
            if frame.empty() {
                break;
            }
            writer.write(&frame)?;
            highgui::imshow(window, &frame)?;

            let key = highgui::wait_key(1)?;
            if key > 0 {
                break;
            }
            ctr += 1;
            if ctr == 600 {
                break;
            }
        }
        cam.release()?;
        writer.release()?;
        Ok(())
    }

    When I run this code the frame rate is terrible, something like 1 fps. For debugging I tried running Cheese; there I got 30 fps at the full 3200x1200 resolution, but as far as I can see I cannot change the fps to 60.

    Then I tried to capture a video using ffmpeg:

    ffmpeg -f v4l2 -framerate 60 -video_size 3200x1200 -i /dev/video2 output.mp4

    With the following output:

    [video4linux2,v4l2 @ 0x5a72cbbd1400] The driver changed the time per frame from 1/60 to 1/2
    Input #0, video4linux2,v4l2, from '/dev/video2':
      Duration: N/A, start: 2744.250608, bitrate: 122880 kb/s
      Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 3200x1200, 122880 kb/s, 2 fps, 2 tbr, 1000k tbn
    File 'output.mp4' already exists. Overwrite? [y/N]

    The frame rate is lowered to 2 fps.

    Then I tried to run v4l2-ctl --list-formats-ext -d 2, with the following output:

    ioctl: VIDIOC_ENUM_FMT
            Type: Video Capture

            [0]: 'MJPG' (Motion-JPEG, compressed)
                    Size: Discrete 3200x1200
                            Interval: Discrete 0.017s (60.000 fps)
                            Interval: Discrete 0.033s (30.000 fps)
                            Interval: Discrete 0.040s (25.000 fps)
                            Interval: Discrete 0.050s (20.000 fps)
                            Interval: Discrete 0.067s (15.000 fps)
                            Interval: Discrete 0.100s (10.000 fps)
                    Size: Discrete 2560x720
                            Interval: Discrete 0.017s (60.000 fps)
                            Interval: Discrete 0.033s (30.000 fps)
                            Interval: Discrete 0.040s (25.000 fps)
                            Interval: Discrete 0.050s (20.000 fps)
                            Interval: Discrete 0.067s (15.000 fps)
                            Interval: Discrete 0.100s (10.000 fps)
                    Size: Discrete 1600x600
                            Interval: Discrete 0.008s (120.000 fps)
                            Interval: Discrete 0.017s (60.000 fps)
                            Interval: Discrete 0.033s (30.000 fps)
                            Interval: Discrete 0.040s (25.000 fps)
                            Interval: Discrete 0.050s (20.000 fps)
                            Interval: Discrete 0.067s (15.000 fps)

    I then tried to open the camera using qv4l2, and there it seemed to work. It does not seem like I can record a video with it, though.

    I am using Rust to learn. I want to be able to record a video programmatically somehow and then do computer vision on it. The easiest would be to do it in Rust, but other solutions are OK.

    Edit: I have found out some more this morning:

    v4l2-ctl -d 2 --list-formats-ext
    ioctl: VIDIOC_ENUM_FMT
        Type: Video Capture

        [0]: 'MJPG' (Motion-JPEG, compressed)
            Size: Discrete 3200x1200
                Interval: Discrete 0.017s (60.000 fps)
                Interval: Discrete 0.033s (30.000 fps)
                Interval: Discrete 0.040s (25.000 fps)
                Interval: Discrete 0.050s (20.000 fps)
                Interval: Discrete 0.067s (15.000 fps)
                Interval: Discrete 0.100s (10.000 fps)

        [1]: 'YUYV' (YUYV 4:2:2)
            Size: Discrete 3200x1200
                Interval: Discrete 0.500s (2.000 fps)
            Size: Discrete 2560x720
                Interval: Discrete 0.500s (2.000 fps)
            Size: Discrete 1600x600
                Interval: Discrete 0.100s (10.000 fps)

    I also found here that the order of the flags is important for ffmpeg. Running this, I can actually record a video at 60 fps:

    ffmpeg -framerate 60 -f v4l2 -video_size 3200x1200 -input_format mjpeg  -i /dev/video2 output.avi

    A drawback is that the images do not look very sharp; you can clearly see the pixels. (I am new to video formats etc. as well; until now it has just worked.)

    If I change from avi to mkv it is slow again.

    In the link above I also saw a suggestion to first do:

    ffmpeg -framerate 60 -f v4l2 -video_size 3200x1200 -input_format mjpeg  -i /dev/video2 -c copy mjpeg.mkv

    and then:

    ffmpeg -i mjpeg.mkv -c:v libx264 -crf 23 -preset medium -pix_fmt yuv420p out.mkv

    which worked, but I am not sure those flags are ideal for the camera I have. I think it is a good start to make it run as expected from the command line with ffmpeg, so that I know which format to use and that it actually works as intended before doing it programmatically.
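
    Given the v4l2 listings above (the camera's YUYV mode is capped at 2 fps at 3200x1200 while MJPG offers 60 fps), a plausible fix on the OpenCV side is to request the MJPG fourcc and the frame rate explicitly before reading frames. Below is a minimal sketch with the opencv crate, not a verified solution; whether the driver honours these properties, and in this order, is an assumption.

    use opencv::{prelude::*, videoio, Result};

    // Sketch: open device 2 (as in the question) via V4L2 and explicitly ask for
    // the compressed MJPG mode at 60 fps before setting the resolution, since the
    // default YUYV mode only offers 2 fps at 3200x1200.
    fn open_camera_mjpg() -> Result<videoio::VideoCapture> {
        let mut cam = videoio::VideoCapture::new(2, videoio::CAP_V4L2)?;
        let mjpg = videoio::VideoWriter::fourcc('M', 'J', 'P', 'G')?;
        cam.set(videoio::CAP_PROP_FOURCC, mjpg as f64)?; // request MJPG
        cam.set(videoio::CAP_PROP_FPS, 60.0)?;           // request 60 fps
        cam.set(videoio::CAP_PROP_FRAME_WIDTH, 3200.0)?;
        cam.set(videoio::CAP_PROP_FRAME_HEIGHT, 1200.0)?;
        Ok(cam)
    }

    The rest of the original loop (reading frames, writing with VideoWriter, showing with highgui) can stay as it is; the only change is how the capture is opened.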

  • Using FFmpeg with URL input causes SIGSEGV in AWS Lambda (Python runtime)

    26 March, by Dave94

    I'm trying to implement a video-converting solution on AWS Lambda following their article named Processing user-generated content using AWS Lambda and FFmpeg. However, when I run my command with subprocess.Popen() it returns -11, which translates to SIGSEGV (segmentation fault). I've tried to process the video with the newest (4.3.1) static build from John Van Sickle's site as well as with the "official" ffmpeg-lambda-layer, but it seems like it doesn't matter which one I use, the result is the same.

    If I download the video to the Lambda's /tmp directory and add this downloaded file as the input to FFmpeg, it works correctly (with the same parameters). However, I'm trying to avoid this, as the /tmp directory's maximum size is only 512 MB, which is not quite enough for me.

    The relevant code which returns SIGSEGV:

    ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + s3_source_signed_url + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p1.communicate()
    print(p1.returncode) #prints -11

    stderr of FFmpeg:

    ffmpeg version 4.1.3-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2019 the FFmpeg developers
      built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
      configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzvbi --enable-libzimg
      libavutil      56. 22.100 / 56. 22.100
      libavcodec     58. 35.100 / 58. 35.100
      libavformat    58. 20.100 / 58. 20.100
      libavdevice    58.  5.100 / 58.  5.100
      libavfilter     7. 40.101 /  7. 40.101
      libswscale      5.  3.100 /  5.  3.100
      libswresample   3.  3.100 /  3.  3.100
      libpostproc    55.  3.100 / 55.  3.100
    [tcp @ 0x728cc00] Starting connection attempt to 52.219.74.177 port 443
    [tcp @ 0x728cc00] Successfully connected to 52.219.74.177 port 443
    [h264 @ 0x729b780] Reinit context to 1280x720, pix_fmt: yuv420p
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'https://bucket.s3.amazonaws.com --> presigned url with 15 min expiration time':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41isomavc1
        creation_time   : 2015-09-02T07:42:42.000000Z
      Duration: 00:00:15.64, start: 0.000000, bitrate: 2640 kb/s
        Stream #0:0(und): Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt709, left), 1280x720 [SAR 1:1 DAR 16:9], 2475 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
        Metadata:
          creation_time   : 2015-09-02T07:42:42.000000Z
          handler_name    : L-SMASH Video Handler
          encoder         : AVC Coding
        Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default)
        Metadata:
          creation_time   : 2015-09-02T07:42:42.000000Z
          handler_name    : L-SMASH Audio Handler
    [mp3 @ 0x733f340] Skipping 0 bytes of junk at 1344.
    Input #1, mp3, from '/opt/bin/audio.mp3':
      Metadata:
        encoded_by      : Logic Pro X
        date            : 2021-01-03
        coding_history  : 
        time_reference  : 158760000
        umid            : 0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004500F9E4
        encoder         : Lavf58.49.100
      Duration: 00:04:01.21, start: 0.025057, bitrate: 320 kb/s
        Stream #1:0: Audio: mp3, 44100 Hz, stereo, fltp, 320 kb/s
        Metadata:
          encoder         : Lavc58.97
    Input #2, png_pipe, from '/opt/bin/watermark.png':
      Duration: N/A, bitrate: N/A
        Stream #2:0: Video: png, 1 reference frame, rgba(pc), 701x190 [SAR 1521:1521 DAR 701:190], 25 tbr, 25 tbn, 25 tbc
    [Parsed_scale_0 @ 0x7341140] w:1920 h:1080 flags:'bilinear' interl:0
    Stream mapping:
      Stream #0:0 (h264) -> scale
      Stream #2:0 (png) -> overlay:overlay
      format -> Stream #0:0 (libx264)
      Stream #1:0 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    [h264 @ 0x72d8600] Reinit context to 1280x720, pix_fmt: yuv420p
    [Parsed_scale_0 @ 0x733c1c0] w:1920 h:1080 flags:'bilinear' interl:0
    [graph 0 input from stream 0:0 @ 0x7669200] w:1280 h:720 pixfmt:yuv420p tb:1/25 fr:25/1 sar:1/1 sws_param:flags=2
    [graph 0 input from stream 2:0 @ 0x766a980] w:701 h:190 pixfmt:rgba tb:1/25 fr:25/1 sar:1521/1521 sws_param:flags=2
    [auto_scaler_0 @ 0x7670240] w:iw h:ih flags:'bilinear' interl:0
    [deinterlace_in_2_0 @ 0x766b680] auto-inserting filter 'auto_scaler_0' between the filter 'graph 0 input from stream 2:0' and the filter 'deinterlace_in_2_0'
    [Parsed_scale_0 @ 0x733c1c0] w:1280 h:720 fmt:yuv420p sar:1/1 -> w:1920 h:1080 fmt:yuv420p sar:1/1 flags:0x2
    [Parsed_pad_1 @ 0x733ce00] w:1920 h:1080 -> w:1920 h:1080 x:0 y:0 color:0x000000FF
    [Parsed_setsar_2 @ 0x733da00] w:1920 h:1080 sar:1/1 dar:16/9 -> sar:1/1 dar:16/9
    [auto_scaler_0 @ 0x7670240] w:701 h:190 fmt:rgba sar:1521/1521 -> w:701 h:190 fmt:yuva420p sar:1/1 flags:0x2
    [Parsed_overlay_3 @ 0x733e440] main w:1920 h:1080 fmt:yuv420p overlay w:701 h:190 fmt:yuva420p
    [Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Selected 1/50 time base
    [Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Sync level 2
    [libx264 @ 0x72c1c00] using SAR=1/1
    [libx264 @ 0x72c1c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 0x72c1c00] profile Progressive High, level 4.0, 4:2:0, 8-bit
    [libx264 @ 0x72c1c00] 264 - core 157 r2969 d4099dd - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=9 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=60 keyint_min=6 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=abr mbtree=1 bitrate=4500 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, flv, to 'pipe:':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41isomavc1
        encoder         : Lavf58.20.100
        Stream #0:0: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 4500 kb/s, 30 fps, 1k tbn, 30 tbc (default)
        Metadata:
          encoder         : Lavc58.35.100 libx264
        Side data:
          cpb: bitrate max/min/avg: 0/0/4500000 buffer size: 0 vbv_delay: -1
        Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 44100 Hz, stereo, fltp, 320 kb/s
        Metadata:
          encoder         : Lavc58.97
    frame=   27 fps=0.0 q=32.0 size=     247kB time=00:00:00.03 bitrate=59500.0kbits/s speed=0.0672x
    frame=   77 fps= 77 q=27.0 size=    1115kB time=00:00:02.03 bitrate=4478.0kbits/s speed=2.03x
    frame=  126 fps= 83 q=25.0 size=    2302kB time=00:00:04.00 bitrate=4712.4kbits/s speed=2.64x
    frame=  177 fps= 87 q=26.0 size=    3576kB time=00:00:06.03 bitrate=4854.4kbits/s speed=2.97x
    frame=  225 fps= 88 q=25.0 size=    4910kB time=00:00:07.96 bitrate=5047.8kbits/s speed=3.13x
    frame=  272 fps= 89 q=27.0 size=    6189kB time=00:00:09.84 bitrate=5147.9kbits/s speed=3.22x
    frame=  320 fps= 90 q=27.0 size=    7058kB time=00:00:11.78 bitrate=4907.5kbits/s speed=3.31x
    frame=  372 fps= 91 q=26.0 size=    8098kB time=00:00:13.84 bitrate=4791.0kbits/s speed=3.4x

    And that's the end of it. It should continue processing until 00:04:02, as that is my audio's length, but it stops here every time (this is approximately my video's length).

    The relevant code which works correctly:

    ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + '/tmp/' + s3_source_key + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
    command1 = shlex.split(ffmpeg_cmd)
    p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p1.communicate()
    print(p1.returncode) #prints 0

    With this code it repeats the video as many times as needed to make it as long as the audio.

    Both versions work correctly on my computer.


    This question is almost the same but in my case FFmpeg is able to access the signed URL.
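
    A possible workaround, sketched below under assumptions rather than taken from the article: keep the download-first approach that already works, and raise the function's ephemeral storage above the default 512 MB (Lambda allows configuring /tmp larger), so the URL input that triggers the segfault is never needed. The /opt/bin/ffmpeg path is from the question; the bucket/key parameters, output path and error handling are illustrative only.

    # Sketch: download the source object to /tmp with boto3, run ffmpeg on the
    # local copy, and surface ffmpeg's stderr instead of a bare -11 return code.
    # Assumes the Lambda's ephemeral storage has been sized for the videos.
    import os
    import shlex
    import subprocess

    import boto3

    s3 = boto3.client('s3')


    def convert_from_tmp(bucket, key):
        local_src = os.path.join('/tmp', os.path.basename(key))
        s3.download_file(bucket, key, local_src)  # avoids the presigned-URL input
        ffmpeg_cmd = '/opt/bin/ffmpeg -y -i "%s" -vcodec libx264 -pix_fmt yuv420p -f flv /tmp/out.flv' % local_src
        p = subprocess.Popen(shlex.split(ffmpeg_cmd),
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        _, stderr = p.communicate()
        if p.returncode != 0:
            # Show why ffmpeg failed rather than just the signal number.
            raise RuntimeError(stderr.decode(errors='replace'))
        return '/tmp/out.flv'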
