
Media (91)
-
GetID3 - Additional buttons
9 April 2013, by
Updated: April 2013
Language: French
Type: Image
-
Core Media Video
4 April 2013, by
Updated: June 2013
Language: French
Type: Video
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
-
Example of action buttons for a collaborative collection
27 February 2013, by
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013, by
Updated: February 2013
Language: English
Type: Image
Other articles (78)
-
Managing creation and editing rights for objects
8 February 2011, by
By default, many features are restricted to administrators but can be configured independently to change the minimum status required to use them, in particular: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;
-
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
On other sites (3391)
-
Announcing our latest open source project: DeviceDetector
This blog post is an announcement for our latest open source project release: DeviceDetector! The Universal Device Detection library will parse any User Agent and detect the browser, operating system, device used (desktop, tablet, mobile, TV, car, console, etc.), brand and model.
Read on to learn more about this exciting release.
Why did we create DeviceDetector?
Our previous library, UserAgentParser, could only detect operating systems and browsers. But as more and more traffic comes from mobile devices such as smartphones and tablets, it is increasingly important to know which devices a website's visitors are using.
To make sure device detection within Piwik gets the attention it needs to be as accurate as possible, we decided to move that part of Piwik into a separate project that we will maintain on its own. As a standalone project, we hope DeviceDetector will gain better visibility as well as better support by and for the community!
DeviceDetector is hosted on GitHub at piwik/device-detector. It is also available as a Composer package through Packagist.
How DeviceDetector works
Every client requesting data from a webserver identifies itself by sending a so-called User-Agent within the request to the server. Those User Agents may contain several pieces of information, such as:
- client name and version (clients can be browsers or other software like feed readers, media players, apps,…)
- operating system name and version
- device identifier, which can be used to detect the brand and model.
For example:
Mozilla/5.0 (Linux; Android 4.4.2; Nexus 5 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.99 Mobile Safari/537.36
This User Agent contains the following information: the operating system is Android 4.4.2, the client uses the browser Chrome Mobile 32.0.1700.99, and the device is a Google Nexus 5 smartphone.
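As a rough illustration of the idea (DeviceDetector itself is a PHP library driven by a large, community-maintained set of regexes defined in .yml files, so the patterns below are purely illustrative), a few hand-written regexes in Python are enough to pull those three pieces out of the example above:

import re

# Purely illustrative sketch of regex-based User-Agent parsing; DeviceDetector
# relies on a far larger set of regexes maintained in .yml files.
ua = ("Mozilla/5.0 (Linux; Android 4.4.2; Nexus 5 Build/KOT49H) "
      "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.99 Mobile Safari/537.36")

os_match = re.search(r"Android (?P<version>[\d.]+)", ua)
browser_match = re.search(r"Chrome/(?P<version>[\d.]+)", ua)
device_match = re.search(r";\s*(?P<model>[^;]+?) Build/", ua)

print("OS:     ", "Android " + os_match.group("version"))             # Android 4.4.2
print("Browser:", "Chrome Mobile " + browser_match.group("version"))  # Chrome Mobile 32.0.1700.99
print("Device: ", device_match.group("model"))                        # Nexus 5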
What DeviceDetector currently detects
DeviceDetector is able to detect bots such as search engines, feed fetchers and site monitors; five different client types, including around 100 browsers, 15 feed readers, some media players, personal information managers (such as mail clients) and mobile apps using the AFNetworking framework; around 80 operating systems; and nine different device types (smartphones, tablets, feature phones, consoles, TVs, car browsers, cameras, smart displays and desktop devices) from over 180 brands.
Note: Piwik itself currently does not use the full feature set of DeviceDetector. Client detection is currently not implemented in Piwik (only detected browsers are reported; other clients are marked as Unknown). Client detection will be implemented in Piwik in the future; follow #5413 to stay updated.
Performance of DeviceDetector
Our detections are currently handled by an enormous number of regexes defined in several .yml files. As parsing these .yml files is a bit slow, DeviceDetector is able to cache the parsed results. By default DeviceDetector uses a static cache, meaning that everything is cached in static variables. Since that only speeds up repeated detections within a single process, there are also adapters to cache in files or memcache, which speeds up detections across requests.
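As a sketch of what such a cache adapter boils down to (the class and method names below are hypothetical, not DeviceDetector's actual interface), a file-backed variant simply persists whatever the slow parsing step produced so that later requests can reuse it:

import json
import os
import tempfile

class FileCache:
    # Hypothetical file-backed cache: parsed definitions are written to disk
    # once, so subsequent requests can skip the slow parsing step entirely.
    def __init__(self, directory=None):
        self.directory = directory or tempfile.gettempdir()

    def _path(self, key):
        return os.path.join(self.directory, key + ".json")

    def get(self, key):
        try:
            with open(self._path(key)) as f:
                return json.load(f)
        except FileNotFoundError:
            return None

    def set(self, key, value):
        with open(self._path(key), "w") as f:
            json.dump(value, f)

A memcache adapter would expose the same get/set pair but store the values in a shared cache server instead of on disk.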
How can users contribute to DeviceDetector?
Submit your devices that are not detected yet
If you own a device that is currently not correctly detected by DeviceDetector, please create an issue on GitHub.
To check whether your device is detected correctly by DeviceDetector, go to your Piwik server, click the 'Settings' link, then click 'Device Detection' under the Diagnostic menu. If the data does not match, please copy the displayed User Agent and use it, together with your device data, to create a ticket.
Submit a list of your User Agents
In order to create new detections or improve existing ones, we need lists of User Agents. If you have a website visited mostly by non-desktop devices, it would be useful if you sent us a list of the User Agents that visited it. To do so you need access to your access logs. The following command will extract the User Agents:
zcat ~/path/to/access/logs* | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn | head -n20000 > /home/piwik/top-user-agents.txt
If you want to help us with this data, please get in touch at devicedetector@piwik.org.
Submit improvements on GitHub
As DeviceDetector is a free/libre library, we invite you to help us improve the detections as well as the code. Please feel free to create tickets and pull requests on GitHub.
What’s the next big thing for DeviceDetector?
Please check out the list of issues in the device-detector issue tracker.
We hope the community will answer our call for help. Together, we can make DeviceDetector the most powerful device detection library!
Happy Device Detection,
-
Raspberry Pi HLS streaming with nginx and ffmpeg; v4l2 error: ioctl(VIDIOC_STREAMON): Protocol error
22 January 2021, by Mirco Weber
I'm trying to build a baby monitor with a Raspberry Pi (Model 4B, 4 GB RAM) and an ordinary webcam (with integrated mic).
I followed this tutorial: https://github.com/DeTeam/webcam-stream/blob/master/Tutorial.md


In short:


- I installed and configured an nginx server with rtmp module enabled.
- I installed ffmpeg with this configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi
- I tried to stream ;)
The nginx configuration seems to be working (streaming sometimes works, the server starts without any complications, and when the server is up and running the webpage is displayed).
The ffmpeg configuration seems to be fine as well, since streaming sometimes works...


I tried a couple of different ffmpeg commands; all of them sometimes work and sometimes result in an error.
The command looks like the following:


ffmpeg -re
-f v4l2
-i /dev/video0
-f alsa
-ac 1
-thread_queue_size 4096
-i hw:CARD=Camera,DEV=0
-profile:v high
-level:v 4.1
-vcodec h264_omx
-r 10
-b:v 512k
-s 640x360
-acodec aac
-strict
-2
-ac 2
-ab 32k
-ar 44100
-f flv
rtmp://localhost/show/stream;



Note: I rearranged the command to make it easier to read. In the terminal, it is all on one line.
Note: There is no difference when using -f video4linux2 instead of -f v4l2.


The camera is recognized by the system:


pi@raspberrypi:~ $ v4l2-ctl --list-devices
bcm2835-codec-decode (platform:bcm2835-codec):
 /dev/video10
 /dev/video11
 /dev/video12

bcm2835-isp (platform:bcm2835-isp):
 /dev/video13
 /dev/video14
 /dev/video15
 /dev/video16

HD Web Camera: HD Web Camera (usb-0000:01:00.0-1.2):
 /dev/video0
 /dev/video1



When only using -i /dev/video0, audio transmission never worked.
The output of arecord -L was:

pi@raspberrypi:~ $ arecord -L
default
 Playback/recording through the PulseAudio sound server
null
 Discard all samples (playback) or generate zero samples (capture)
jack
 JACK Audio Connection Kit
pulse
 PulseAudio Sound Server
usbstream:CARD=Headphones
 bcm2835 Headphones
 USB Stream Output
sysdefault:CARD=Camera
 HD Web Camera, USB Audio
 Default Audio Device
front:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Front speakers
surround21:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 2.1 Surround output to Front and Subwoofer speakers
surround40:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 4.0 Surround output to Front and Rear speakers
surround41:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 4.1 Surround output to Front, Rear and Subwoofer speakers
surround50:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 5.0 Surround output to Front, Center and Rear speakers
surround51:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 5.1 Surround output to Front, Center, Rear and Subwoofer speakers
surround71:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
iec958:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 IEC958 (S/PDIF) Digital Audio Output
dmix:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Direct sample mixing device
dsnoop:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Direct sample snooping device
hw:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Direct hardware device without any conversions
plughw:CARD=Camera,DEV=0
 HD Web Camera, USB Audio
 Hardware device with all software conversions
usbstream:CARD=Camera
 HD Web Camera
 USB Stream Output



That's why I added -i hw:CARD=Camera,DEV=0.

As mentioned above, it worked very well a couple of times with this configuration and these commands.
But very often, I get the following error message when starting to stream:


pi@raspberrypi:~ $ ffmpeg -re -f video4linux2 -i /dev/video0 -f alsa -ac 1 -thread_queue_size 4096 -i hw:CARD=Camera,DEV=0 -profile:v high -level:v 4.1 -vcodec h264_omx -r 10 -b:v 512k -s 640x360 -acodec aac -strict -2 -ac 2 -ab 32k -ar 44100 -f flv rtmp://localhost/show/stream
ffmpeg version N-100673-g553eb07737 Copyright (c) 2000-2021 the FFmpeg developers
 built with gcc 8 (Raspbian 8.3.0-6+rpi1)
 configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi --extra-ldflags=-latomic
 libavutil 56. 63.101 / 56. 63.101
 libavcodec 58.117.101 / 58.117.101
 libavformat 58. 65.101 / 58. 65.101
 libavdevice 58. 11.103 / 58. 11.103
 libavfilter 7. 96.100 / 7. 96.100
 libswscale 5. 8.100 / 5. 8.100
 libswresample 3. 8.100 / 3. 8.100
 libpostproc 55. 8.100 / 55. 8.100
[video4linux2,v4l2 @ 0x2ea4600] ioctl(VIDIOC_STREAMON): Protocol error
/dev/video0: Protocol error



And when I'm switching to /dev/video1 (since this was also listed in the output of v4l2-ctl --list-devices), I get the following error message:

pi@raspberrypi:~ $ ffmpeg -re -f v4l2 -i /dev/video1 -f alsa -ac 1 -thread_queue_size 4096 -i hw:CARD=Camera,DEV=0 -profile:v high -level:v 4.1 -vcodec h264_omx -r 10 -b:v 512k -s 640x360 -acodec aac -strict -2 -ac 2 -ab 32k -ar 44100 -f flv rtmp://localhost/show/stream
ffmpeg version N-100673-g553eb07737 Copyright (c) 2000-2021 the FFmpeg developers
 built with gcc 8 (Raspbian 8.3.0-6+rpi1)
 configuration: --enable-gpl --enable-nonfree --enable-mmal --enable-omx-rpi --extra-ldflags=-latomic
 libavutil 56. 63.101 / 56. 63.101
 libavcodec 58.117.101 / 58.117.101
 libavformat 58. 65.101 / 58. 65.101
 libavdevice 58. 11.103 / 58. 11.103
 libavfilter 7. 96.100 / 7. 96.100
 libswscale 5. 8.100 / 5. 8.100
 libswresample 3. 8.100 / 3. 8.100
 libpostproc 55. 8.100 / 55. 8.100
[video4linux2,v4l2 @ 0x1aa4610] ioctl(VIDIOC_G_INPUT): Inappropriate ioctl for device
/dev/video1: Inappropriate ioctl for device



When using the video0 input, the webcam's LED that indicates access is constantly on. When using video1, it is not.

After hours and days of googling and tears and whiskey, for the sake of my liver, my marriage and my physical and mental health, I'm very sincerely asking for your help...
What the f**k is happening and what can I do to make it work???


Thanks everybody :)


UPDATE 1:


- using the full path to ffmpeg does not change anything...
- /dev/video0 and /dev/video1 have access rights for everybody; sudo ffmpeg ... does not change anything either
- the problem seems to occur at an "early stage": stripping the command down to ffmpeg -i /dev/video0 results in the same problem


UPDATE 2:

It seems that everything works when I first start another application that needs access to the webcam and then ffmpeg...
It might be some driver issue, but when I look at the loaded modules with lsmod, there is absolutely no change before and after I started the application...
Any help is still appreciated...

UPDATE 3:

I was checking the output of dmesg.

When I started the first application, I received this message:

uvcvideo: Failed to query (GET_DEF) UVC control 12 on unit 2: -32 (exp. 4).


And when I started ffmpeg, nothing happened but everything worked...
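Based on the behaviour described in Updates 2 and 3, one purely speculative workaround sketch (not a confirmed fix) is to touch the camera once with another V4L2 client before handing it to ffmpeg; this assumes v4l2-ctl from the v4l-utils package is installed:

import subprocess

# Speculative workaround based on Updates 2/3: access the camera once with
# another V4L2 client (here: grab a single frame via v4l2-ctl), then run ffmpeg.
subprocess.run(
    ["v4l2-ctl", "--device=/dev/video0", "--stream-mmap", "--stream-count=1"],
    check=False,  # this first access may fail, as the dmesg output above suggests
)

ffmpeg_cmd = (
    "ffmpeg -re -f v4l2 -i /dev/video0 -f alsa -ac 1 -thread_queue_size 4096 "
    "-i hw:CARD=Camera,DEV=0 -profile:v high -level:v 4.1 -vcodec h264_omx "
    "-r 10 -b:v 512k -s 640x360 -acodec aac -strict -2 -ac 2 -ab 32k -ar 44100 "
    "-f flv rtmp://localhost/show/stream"
)
subprocess.run(ffmpeg_cmd.split(), check=True)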

-
Using FFmpeg with URL input causes SIGSEGV in AWS Lambda (Python runtime)
26 March, by Dave94
I'm trying to implement a video converting solution on AWS Lambda, following their article named Processing user-generated content using AWS Lambda and FFmpeg.
However, when I run my command with subprocess.Popen() it returns -11, which translates to SIGSEGV (segmentation fault).
I've tried to process the video with the newest (4.3.1) static build from John Van Sickle's site as well as with the "official" ffmpeg-lambda-layer, but it seems like it doesn't matter which one I use; the result is the same.
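(A quick way to confirm that mapping, by the way: in Python's subprocess module a negative return code means the child process was killed by that signal number, so -11 corresponds to SIGSEGV.)

import signal
import subprocess

# Demonstrates how subprocess reports a child killed by a signal:
# the return code is the negative signal number.
p = subprocess.run(["sh", "-c", "kill -11 $$"])
print(p.returncode)                        # -11 on Linux
print(signal.Signals(-p.returncode).name)  # SIGSEGV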

If I download the video to the Lambda's /tmp directory and add this downloaded file as an input to FFmpeg, it works correctly (with the same parameters). However, I'm trying to avoid this, as the /tmp directory's maximum size is only 512 MB, which is not quite enough for me.

The relevant code which returns SIGSEGV:


ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + s3_source_signed_url + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
command1 = shlex.split(ffmpeg_cmd)
p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p1.communicate()
print(p1.returncode) #prints -11



stderr of FFmpeg:


ffmpeg version 4.1.3-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers
 built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
 configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzvbi --enable-libzimg
 libavutil 56. 22.100 / 56. 22.100
 libavcodec 58. 35.100 / 58. 35.100
 libavformat 58. 20.100 / 58. 20.100
 libavdevice 58. 5.100 / 58. 5.100
 libavfilter 7. 40.101 / 7. 40.101
 libswscale 5. 3.100 / 5. 3.100
 libswresample 3. 3.100 / 3. 3.100
 libpostproc 55. 3.100 / 55. 3.100
[tcp @ 0x728cc00] Starting connection attempt to 52.219.74.177 port 443
[tcp @ 0x728cc00] Successfully connected to 52.219.74.177 port 443
[h264 @ 0x729b780] Reinit context to 1280x720, pix_fmt: yuv420p
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'https://bucket.s3.amazonaws.com --> presigned url with 15 min expiration time':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp42mp41isomavc1
 creation_time : 2015-09-02T07:42:42.000000Z
 Duration: 00:00:15.64, start: 0.000000, bitrate: 2640 kb/s
 Stream #0:0(und): Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt709, left), 1280x720 [SAR 1:1 DAR 16:9], 2475 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
 Metadata:
 creation_time : 2015-09-02T07:42:42.000000Z
 handler_name : L-SMASH Video Handler
 encoder : AVC Coding
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default)
 Metadata:
 creation_time : 2015-09-02T07:42:42.000000Z
 handler_name : L-SMASH Audio Handler
[mp3 @ 0x733f340] Skipping 0 bytes of junk at 1344.
Input #1, mp3, from '/opt/bin/audio.mp3':
 Metadata:
 encoded_by : Logic Pro X
 date : 2021-01-03
 coding_history : 
 time_reference : 158760000
 umid : 0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004500F9E4
 encoder : Lavf58.49.100
 Duration: 00:04:01.21, start: 0.025057, bitrate: 320 kb/s
 Stream #1:0: Audio: mp3, 44100 Hz, stereo, fltp, 320 kb/s
 Metadata:
 encoder : Lavc58.97
Input #2, png_pipe, from '/opt/bin/watermark.png':
 Duration: N/A, bitrate: N/A
 Stream #2:0: Video: png, 1 reference frame, rgba(pc), 701x190 [SAR 1521:1521 DAR 701:190], 25 tbr, 25 tbn, 25 tbc
[Parsed_scale_0 @ 0x7341140] w:1920 h:1080 flags:'bilinear' interl:0
Stream mapping:
 Stream #0:0 (h264) -> scale
 Stream #2:0 (png) -> overlay:overlay
 format -> Stream #0:0 (libx264)
 Stream #1:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
[h264 @ 0x72d8600] Reinit context to 1280x720, pix_fmt: yuv420p
[Parsed_scale_0 @ 0x733c1c0] w:1920 h:1080 flags:'bilinear' interl:0
[graph 0 input from stream 0:0 @ 0x7669200] w:1280 h:720 pixfmt:yuv420p tb:1/25 fr:25/1 sar:1/1 sws_param:flags=2
[graph 0 input from stream 2:0 @ 0x766a980] w:701 h:190 pixfmt:rgba tb:1/25 fr:25/1 sar:1521/1521 sws_param:flags=2
[auto_scaler_0 @ 0x7670240] w:iw h:ih flags:'bilinear' interl:0
[deinterlace_in_2_0 @ 0x766b680] auto-inserting filter 'auto_scaler_0' between the filter 'graph 0 input from stream 2:0' and the filter 'deinterlace_in_2_0'
[Parsed_scale_0 @ 0x733c1c0] w:1280 h:720 fmt:yuv420p sar:1/1 -> w:1920 h:1080 fmt:yuv420p sar:1/1 flags:0x2
[Parsed_pad_1 @ 0x733ce00] w:1920 h:1080 -> w:1920 h:1080 x:0 y:0 color:0x000000FF
[Parsed_setsar_2 @ 0x733da00] w:1920 h:1080 sar:1/1 dar:16/9 -> sar:1/1 dar:16/9
[auto_scaler_0 @ 0x7670240] w:701 h:190 fmt:rgba sar:1521/1521 -> w:701 h:190 fmt:yuva420p sar:1/1 flags:0x2
[Parsed_overlay_3 @ 0x733e440] main w:1920 h:1080 fmt:yuv420p overlay w:701 h:190 fmt:yuva420p
[Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Selected 1/50 time base
[Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Sync level 2
[libx264 @ 0x72c1c00] using SAR=1/1
[libx264 @ 0x72c1c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x72c1c00] profile Progressive High, level 4.0, 4:2:0, 8-bit
[libx264 @ 0x72c1c00] 264 - core 157 r2969 d4099dd - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=9 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=60 keyint_min=6 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=abr mbtree=1 bitrate=4500 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, flv, to 'pipe:':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp42mp41isomavc1
 encoder : Lavf58.20.100
 Stream #0:0: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 4500 kb/s, 30 fps, 1k tbn, 30 tbc (default)
 Metadata:
 encoder : Lavc58.35.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/4500000 buffer size: 0 vbv_delay: -1
 Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 44100 Hz, stereo, fltp, 320 kb/s
 Metadata:
 encoder : Lavc58.97
frame= 27 fps=0.0 q=32.0 size= 247kB time=00:00:00.03 bitrate=59500.0kbits/s speed=0.0672x
frame= 77 fps= 77 q=27.0 size= 1115kB time=00:00:02.03 bitrate=4478.0kbits/s speed=2.03x
frame= 126 fps= 83 q=25.0 size= 2302kB time=00:00:04.00 bitrate=4712.4kbits/s speed=2.64x
frame= 177 fps= 87 q=26.0 size= 3576kB time=00:00:06.03 bitrate=4854.4kbits/s speed=2.97x
frame= 225 fps= 88 q=25.0 size= 4910kB time=00:00:07.96 bitrate=5047.8kbits/s speed=3.13x
frame= 272 fps= 89 q=27.0 size= 6189kB time=00:00:09.84 bitrate=5147.9kbits/s speed=3.22x
frame= 320 fps= 90 q=27.0 size= 7058kB time=00:00:11.78 bitrate=4907.5kbits/s speed=3.31x
frame= 372 fps= 91 q=26.0 size= 8098kB time=00:00:13.84 bitrate=4791.0kbits/s speed=3.4x



And that's the end of it. It should continue processing until 00:04:02, as that's my audio's length, but it stops here every time (this is approximately my video's length).

The relevant code which works correctly:


ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + '/tmp/' + s3_source_key + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
command1 = shlex.split(ffmpeg_cmd)
p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p1.communicate()
print(p1.returncode) #prints 0



With this code, the video is repeated as many times as needed to make it as long as the audio.
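For reference, the download step that fills /tmp before that command runs can look roughly like this (the bucket name and key below are placeholders; in the real function they come from the Lambda event):

import boto3

s3 = boto3.client("s3")

# Placeholder values; in the Lambda handler these come from the triggering event.
s3_source_bucket = "my-bucket"
s3_source_key = "input.mp4"

# Download the object into /tmp so ffmpeg can read it as a local file.
s3.download_file(s3_source_bucket, s3_source_key, "/tmp/" + s3_source_key)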


Both versions work correctly on my computer.


This question is almost the same, but in my case FFmpeg is able to access the signed URL.