
Other articles (111)
-
Encoding and processing into web-friendly formats
13 April 2011 — MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed to extract the data needed for search-engine indexing, and then exported as a series of image files. (A sketch of this kind of conversion follows the article list below.)
All uploaded files are stored online in their original format, so you can (...)
-
Supporting all media types
13 April 2011 — Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Automatic installation script for MediaSPIP
25 April 2011 — To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
You need SSH access to your server and a "root" account in order to use the script, which will allow it to install the dependencies. Contact your hosting provider if you do not have these.
The documentation on using the installation script (...)
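
Regarding the encoding item above: the following is a minimal, hypothetical Python sketch of that kind of server-side conversion, shelling out to ffmpeg to produce WebM and MP4 renditions of an uploaded video. It is an illustration only, not MediaSPIP's actual code; the input path and encoder settings are assumptions.

import subprocess
from pathlib import Path

def encode_web_formats(source):
    # Transcode one uploaded video into HTML5-friendly renditions (hypothetical sketch).
    base = Path(source).with_suffix("")
    # WebM (VP9 video + Opus audio)
    subprocess.run(["ffmpeg", "-y", "-i", source,
                    "-c:v", "libvpx-vp9", "-b:v", "1M", "-c:a", "libopus",
                    str(base) + ".webm"], check=True)
    # MP4 (H.264 video + AAC audio) as the widely supported fallback
    subprocess.run(["ffmpeg", "-y", "-i", source,
                    "-c:v", "libx264", "-pix_fmt", "yuv420p", "-c:a", "aac",
                    str(base) + ".mp4"], check=True)

encode_web_formats("/tmp/upload.mov")  # hypothetical input path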
On other sites (4007)
-
Using FFmpeg with URL input causes SIGSEGV in AWS Lambda (Python runtime)
26 March, by Dave94 — I'm trying to implement a video-converting solution on AWS Lambda, following their article Processing user-generated content using AWS Lambda and FFmpeg.
However, when I run my command with subprocess.Popen() it returns -11, which translates to SIGSEGV (segmentation fault).
I've tried to process the video with the newest (4.3.1) static build from John Van Sickle's site as well as with the "official" ffmpeg-lambda-layer, but it doesn't seem to matter which one I use: the result is the same.

If I download the video to the Lambda's /tmp directory and add this downloaded file as an input to FFmpeg, it works correctly (with the same parameters). However, I'm trying to avoid this, as the /tmp directory's maximum size is only 512 MB, which is not quite enough for me.
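
A possible way around the 512 MB /tmp limit (a hedged sketch under assumptions, not something from the original post) is to keep both input and output off the local disk: read from the presigned URL, as above, and stream FFmpeg's stdout straight to S3 with boto3. The bucket, key and simplified command below are placeholders.

import shlex
import subprocess
import boto3

s3 = boto3.client("s3")

# Placeholder presigned URL and output location.
signed_url = "https://bucket.s3.amazonaws.com/input.mp4?X-Amz-Signature=..."
cmd = shlex.split('/opt/bin/ffmpeg -i "' + signed_url + '" -c:v libx264 -pix_fmt yuv420p -c:a aac -f flv pipe:1')

# Discard stderr so an unread pipe can never fill up and stall ffmpeg.
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

# upload_fileobj reads proc.stdout as ffmpeg produces it, so nothing touches /tmp.
s3.upload_fileobj(proc.stdout, "output-bucket", "output.flv")
proc.wait()
print(proc.returncode)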

The relevant code, which returns SIGSEGV:


import shlex
import subprocess

ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + s3_source_signed_url + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
command1 = shlex.split(ffmpeg_cmd)
p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p1.communicate()  # blocks until ffmpeg exits
print(p1.returncode)  # prints -11
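
As a hedged illustration (my addition, not part of the original post): a negative subprocess return code is the number of the signal that killed the child, and Python's standard library can turn it into a readable name, which makes failures like this easier to log.

import signal
import subprocess

proc = subprocess.run(["/opt/bin/ffmpeg", "-version"], capture_output=True)
if proc.returncode < 0:
    # A negative return code means the child was killed by a signal, e.g. -11 -> SIGSEGV.
    print("ffmpeg was killed by", signal.Signals(-proc.returncode).name)
else:
    print("ffmpeg exited with code", proc.returncode)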



stderr of FFmpeg:


ffmpeg version 4.1.3-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers
 built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
 configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzvbi --enable-libzimg
 libavutil 56. 22.100 / 56. 22.100
 libavcodec 58. 35.100 / 58. 35.100
 libavformat 58. 20.100 / 58. 20.100
 libavdevice 58. 5.100 / 58. 5.100
 libavfilter 7. 40.101 / 7. 40.101
 libswscale 5. 3.100 / 5. 3.100
 libswresample 3. 3.100 / 3. 3.100
 libpostproc 55. 3.100 / 55. 3.100
[tcp @ 0x728cc00] Starting connection attempt to 52.219.74.177 port 443
[tcp @ 0x728cc00] Successfully connected to 52.219.74.177 port 443
[h264 @ 0x729b780] Reinit context to 1280x720, pix_fmt: yuv420p
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'https://bucket.s3.amazonaws.com --> presigned url with 15 min expiration time':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp42mp41isomavc1
 creation_time : 2015-09-02T07:42:42.000000Z
 Duration: 00:00:15.64, start: 0.000000, bitrate: 2640 kb/s
 Stream #0:0(und): Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(tv, bt709, left), 1280x720 [SAR 1:1 DAR 16:9], 2475 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
 Metadata:
 creation_time : 2015-09-02T07:42:42.000000Z
 handler_name : L-SMASH Video Handler
 encoder : AVC Coding
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default)
 Metadata:
 creation_time : 2015-09-02T07:42:42.000000Z
 handler_name : L-SMASH Audio Handler
[mp3 @ 0x733f340] Skipping 0 bytes of junk at 1344.
Input #1, mp3, from '/opt/bin/audio.mp3':
 Metadata:
 encoded_by : Logic Pro X
 date : 2021-01-03
 coding_history : 
 time_reference : 158760000
 umid : 0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004500F9E4
 encoder : Lavf58.49.100
 Duration: 00:04:01.21, start: 0.025057, bitrate: 320 kb/s
 Stream #1:0: Audio: mp3, 44100 Hz, stereo, fltp, 320 kb/s
 Metadata:
 encoder : Lavc58.97
Input #2, png_pipe, from '/opt/bin/watermark.png':
 Duration: N/A, bitrate: N/A
 Stream #2:0: Video: png, 1 reference frame, rgba(pc), 701x190 [SAR 1521:1521 DAR 701:190], 25 tbr, 25 tbn, 25 tbc
[Parsed_scale_0 @ 0x7341140] w:1920 h:1080 flags:'bilinear' interl:0
Stream mapping:
 Stream #0:0 (h264) -> scale
 Stream #2:0 (png) -> overlay:overlay
 format -> Stream #0:0 (libx264)
 Stream #1:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
[h264 @ 0x72d8600] Reinit context to 1280x720, pix_fmt: yuv420p
[Parsed_scale_0 @ 0x733c1c0] w:1920 h:1080 flags:'bilinear' interl:0
[graph 0 input from stream 0:0 @ 0x7669200] w:1280 h:720 pixfmt:yuv420p tb:1/25 fr:25/1 sar:1/1 sws_param:flags=2
[graph 0 input from stream 2:0 @ 0x766a980] w:701 h:190 pixfmt:rgba tb:1/25 fr:25/1 sar:1521/1521 sws_param:flags=2
[auto_scaler_0 @ 0x7670240] w:iw h:ih flags:'bilinear' interl:0
[deinterlace_in_2_0 @ 0x766b680] auto-inserting filter 'auto_scaler_0' between the filter 'graph 0 input from stream 2:0' and the filter 'deinterlace_in_2_0'
[Parsed_scale_0 @ 0x733c1c0] w:1280 h:720 fmt:yuv420p sar:1/1 -> w:1920 h:1080 fmt:yuv420p sar:1/1 flags:0x2
[Parsed_pad_1 @ 0x733ce00] w:1920 h:1080 -> w:1920 h:1080 x:0 y:0 color:0x000000FF
[Parsed_setsar_2 @ 0x733da00] w:1920 h:1080 sar:1/1 dar:16/9 -> sar:1/1 dar:16/9
[auto_scaler_0 @ 0x7670240] w:701 h:190 fmt:rgba sar:1521/1521 -> w:701 h:190 fmt:yuva420p sar:1/1 flags:0x2
[Parsed_overlay_3 @ 0x733e440] main w:1920 h:1080 fmt:yuv420p overlay w:701 h:190 fmt:yuva420p
[Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Selected 1/50 time base
[Parsed_overlay_3 @ 0x733e440] [framesync @ 0x733e5a8] Sync level 2
[libx264 @ 0x72c1c00] using SAR=1/1
[libx264 @ 0x72c1c00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x72c1c00] profile Progressive High, level 4.0, 4:2:0, 8-bit
[libx264 @ 0x72c1c00] 264 - core 157 r2969 d4099dd - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=9 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=60 keyint_min=6 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=abr mbtree=1 bitrate=4500 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, flv, to 'pipe:':
 Metadata:
 major_brand : mp42
 minor_version : 0
 compatible_brands: mp42mp41isomavc1
 encoder : Lavf58.20.100
 Stream #0:0: Video: h264 (libx264), 1 reference frame ([7][0][0][0] / 0x0007), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 4500 kb/s, 30 fps, 1k tbn, 30 tbc (default)
 Metadata:
 encoder : Lavc58.35.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/4500000 buffer size: 0 vbv_delay: -1
 Stream #0:1: Audio: mp3 ([2][0][0][0] / 0x0002), 44100 Hz, stereo, fltp, 320 kb/s
 Metadata:
 encoder : Lavc58.97
frame= 27 fps=0.0 q=32.0 size= 247kB time=00:00:00.03 bitrate=59500.0kbits/s speed=0.0672x
frame= 77 fps= 77 q=27.0 size= 1115kB time=00:00:02.03 bitrate=4478.0kbits/s speed=2.03x
frame= 126 fps= 83 q=25.0 size= 2302kB time=00:00:04.00 bitrate=4712.4kbits/s speed=2.64x
frame= 177 fps= 87 q=26.0 size= 3576kB time=00:00:06.03 bitrate=4854.4kbits/s speed=2.97x
frame= 225 fps= 88 q=25.0 size= 4910kB time=00:00:07.96 bitrate=5047.8kbits/s speed=3.13x
frame= 272 fps= 89 q=27.0 size= 6189kB time=00:00:09.84 bitrate=5147.9kbits/s speed=3.22x
frame= 320 fps= 90 q=27.0 size= 7058kB time=00:00:11.78 bitrate=4907.5kbits/s speed=3.31x
frame= 372 fps= 91 q=26.0 size= 8098kB time=00:00:13.84 bitrate=4791.0kbits/s speed=3.4x



And that's the end of it. It should continue processing until 00:04:02, since that's the length of my audio, but it stops here every time (this is approximately the length of my video).
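
One hedged idea to test (purely an assumption, not something verified in the post): instead of relying on -shortest together with -stream_loop -1, the audio length could be measured with ffprobe and passed as an explicit -t output cap. A minimal sketch of the probing step:

import shlex
import subprocess

# /opt/bin/audio.mp3 is the audio path from the command above; the ffprobe path is an assumption.
probe_cmd = shlex.split(
    "/opt/bin/ffprobe -v error -show_entries format=duration "
    "-of default=noprint_wrappers=1:nokey=1 /opt/bin/audio.mp3"
)
audio_duration = float(subprocess.check_output(probe_cmd).decode().strip())

# The value could then be spliced into the ffmpeg command as "-t <audio_duration>".
print("audio duration:", audio_duration)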

The relevant code, which works correctly:


# Same command as above, but the input is the file already downloaded to /tmp instead of the signed URL.
ffmpeg_cmd = '/opt/bin/ffmpeg -stream_loop -1 -i "' + '/tmp/' + s3_source_key + '" -i /opt/bin/audio.mp3 -i /opt/bin/watermark.png -shortest -y -deinterlace -vcodec libx264 -pix_fmt yuv420p -preset veryfast -r 30 -g 60 -b:v 4500k -c:a copy -map 0:v:0 -map 1:a:0 -filter_complex scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,setsar=1,overlay=(W-w)/2:(H-h)/2,format=yuv420p -loglevel verbose -f flv -'
command1 = shlex.split(ffmpeg_cmd)
p1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p1.communicate()
print(p1.returncode)  # prints 0



With this code, the video is looped as many times as needed to match the length of the audio.
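
For completeness, a minimal sketch of how the object would typically be fetched into /tmp before running the working command; this is my assumption about the surrounding code, not something from the post, and the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

# Placeholder bucket; s3_source_key mirrors the variable in the command above (assumed to be a flat key).
s3_source_bucket = "my-upload-bucket"
s3_source_key = "input.mp4"

s3.download_file(s3_source_bucket, s3_source_key, "/tmp/" + s3_source_key)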


Both versions work correctly on my computer.


This question is almost the same but in my case FFmpeg is able to access the signed URL.


-
Carrierwave video not being processed before uploading to S3
12 December 2013, by Cramps — I'm using Carrierwave, Carrierwave-video and Carrierwave-video-thumbnailer to process videos and generate a thumbnail when they're uploaded. This was all working nicely while I was saving the files on my file system. However, now that I've added uploading to Amazon S3 using the carrierwave-aws gem, the videos are being uploaded to S3 without being processed first. It's as if the process encode_video and version :thumb steps are being skipped by the uploader.
Here's what was working for me at first (before adding S3):
class VideoUploader < CarrierWave::Uploader::Base
  include CarrierWave::Video
  include CarrierWave::Video::Thumbnailer

  storage :file

  def store_dir
    "upload/path/"
  end

  process encode_video: [{ bunch of video options }]

  version :thumb do
    process thumbnail: [{ bunch of thumbnailer options }]

    def full_filename for_file
      png_name for_file, version_name
    end
  end

  def png_name for_file, version_name
    %Q{#{version_name}_#{for_file.chomp(File.extname(for_file))}.png}
  end
end

Now it's really just the same, except it's using storage :aws instead.
-
WebRTC books – a brief review
30 December 2013, by silvia — I just finished reading Rob Manson’s awesome book “Getting Started with WebRTC” and I can highly recommend it to any Web developer who is interested in WebRTC.
Rob explains very clearly how to create your first video, audio or data peer connection using WebRTC in current Google Chrome or Firefox (I think it also now applies to Opera, though that wasn’t the case when his book was published). He makes example code available, so you can replicate it easily in your own Web application, including the setup of a signalling server. He also points out that you need an ICE (STUN/TURN) server to punch through firewalls and gives recommendations for what software is available, but stops short of explaining how to set them up.
Rob’s focus is very much on the features required in a typical Web application:
- video calls
- audio calls
- text chats
- file sharing
In fact, he provides the most in-depth demo of how to set up a good file sharing interface I have come across.
Rob then also extends his introduction to WebRTC to two key application areas: education and team communication. His recommendations are spot on and required reading for anyone developing applications in these spaces.
—
Before Rob’s book, I have also read Alan Johnson and Dan Burnett’s “WebRTC” book on APIs and RTCWEB protocols of the HTML5 Real-Time Web.
Alan and Dan’s book was written more than a year ago and explains the state of standardisation at that time. It’s probably a little outdated now, but it still gives you good foundations on why some decisions were made the way they were and which issues are contentious (some of which still remain). If you really want to understand what happens behind the scenes when you call certain functions in the WebRTC APIs of browsers, then this is for you.
Alan and Dan’s book explains in more detail than Rob’s how IP addresses of communication partners are found, how firewall hole punching works, how sessions get negotiated, and how the standards process works. It’s probably less useful to a Web developer who just wants to implement video-call functionality in their Web application, though if something goes wrong you may find yourself digging into the details of SDP, SRTP, DTLS, and other cryptic abbreviations of protocols that all need to work together to get a WebRTC call working.
—
Overall, both books are worthwhile and cover different aspects of WebRTC that you will stumble across if you are directly dealing with WebRTC code.