
Other articles (51)
-
General document management
13 May 2011
MediaSPIP never modifies the original document that is put online.
For each document put online, it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original available for download in case the original document cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
The tables below explain what MediaSPIP can do (...)
-
Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible, and development is based on expanding the (...)
-
Retrieving information from the master site when installing an instance
26 November 2010
Usefulness
On the main site, a shared-hosting ("mutualisation") instance is defined by several things: the data in the spip_mutus table; its logo; and its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the shared-hosting instance.
It can therefore make perfect sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)
On other sites (8111)
-
Boto3 Video Upload 0 Bytes from Heroku
14 July 2017, by genghiskhan
I have a small Flask API that takes a video and an image, overlays the image on the video, and uploads the result to Amazon S3. I am using ffmpeg to do the actual overlaying. Here is that code:
command = "ffmpeg -i {0} -i {1} -filter_complex \"overlay=0:0\" {2}".format(background_name, overlay_name, output_name)
subprocess.getoutput(command)
Then I simply upload it via Boto3:
s3.upload_file(output_name, VIDEO_BUCKET_NAME, output_name)
This code works fine when I run it on localhost; however, when I test it while deployed to Heroku, it always uploads a file with 0 bytes. I suspect that it may be a problem with Heroku's transient filesystem, but the file is being used immediately after it is created.
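A minimal sketch of the same flow, with the ffmpeg exit status and the output size checked before uploading; background_name, overlay_name, output_name, VIDEO_BUCKET_NAME and the boto3 client come from the question, while the /tmp paths and the bucket value are placeholders:

import os
import subprocess
import boto3

s3 = boto3.client('s3')

# Placeholder values; in the real app these come from the Flask request handling.
background_name = '/tmp/background.mp4'
overlay_name = '/tmp/overlay.png'
output_name = '/tmp/output.mp4'
VIDEO_BUCKET_NAME = 'my-video-bucket'

# Run ffmpeg with check=True so a non-zero exit status raises an exception
# instead of silently leaving an empty or missing output file behind.
command = ['ffmpeg', '-y', '-i', background_name, '-i', overlay_name,
           '-filter_complex', 'overlay=0:0', output_name]
subprocess.run(command, check=True)

# Guard against uploading an empty file (the symptom described above).
if os.path.getsize(output_name) == 0:
    raise RuntimeError('ffmpeg produced an empty output file')

s3.upload_file(output_name, VIDEO_BUCKET_NAME, output_name)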
-
How can I mux an MKV and MKA file and get it to play in a browser?
28 June 2017, by Robert
I'm using ffmpeg to merge .mkv and .mka files into .mp4 files. My current command looks like this:
ffmpeg -i video.mkv -i audio.mka output_path.mp4
The audio and video files are pre-signed URLs from Amazon S3. Even on a server with sufficient resources, this process goes very slowly. I've researched situations where you can tell ffmpeg to skip re-encoding each frame, but I think that in my situation it actually does need to re-encode each frame.
I've downloaded two sample files to my MacBook Pro and installed ffmpeg locally via Homebrew. When I run the command
ffmpeg -i video.mkv -i audio.mka -c copy output.mp4
I get the following output:
ffmpeg version 3.3.2 Copyright (c) 2000-2017 the FFmpeg developers
built with Apple LLVM version 8.1.0 (clang-802.0.42)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.3.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-libmp3lame --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-vda
libavutil 55. 58.100 / 55. 58.100
libavcodec 57. 89.100 / 57. 89.100
libavformat 57. 71.100 / 57. 71.100
libavdevice 57. 6.100 / 57. 6.100
libavfilter 6. 82.100 / 6. 82.100
libavresample 3. 5. 0 / 3. 5. 0
libswscale 4. 6.100 / 4. 6.100
libswresample 2. 7.100 / 2. 7.100
libpostproc 54. 5.100 / 54. 5.100
Input #0, matroska,webm, from '319_audio_1498590673766.mka':
Metadata:
encoder : GStreamer matroskamux version 1.8.1.1
creation_time : 2017-06-27T19:10:58.000000Z
Duration: 00:00:03.53, start: 2.831000, bitrate: 50 kb/s
Stream #0:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
Metadata:
title : Audio
Input #1, matroska,webm, from '319_video_1498590673766.mkv':
Metadata:
encoder : GStreamer matroskamux version 1.8.1.1
creation_time : 2017-06-27T19:10:58.000000Z
Duration: 00:00:03.97, start: 2.851000, bitrate: 224 kb/s
Stream #1:0(eng): Video: vp8, yuv420p(progressive), 640x480, SAR 1:1 DAR 4:3, 30 tbr, 1k tbn, 1k tbc (default)
Metadata:
title : Video
[mp4 @ 0x7fa4f0806800] Could not find tag for codec vp8 in stream #0, codec not currently supported in container
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Stream mapping:
Stream #1:0 -> #0:0 (copy)
Stream #0:0 -> #0:1 (copy)
Last message repeated 1 times
So it appears that the specific encodings I'm working with are VP8 video and Opus audio, which I believe are incompatible with the .mp4 output container. I would appreciate answers that cover ways of optimally merging VP8 and Opus into .mp4 output, or answers that point me towards output media formats that are both compatible with VP8 and Opus and playable on web and mobile devices, so that I can bypass the re-encoding step altogether.
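For reference, WebM is a container that holds VP8 video and Opus audio natively, so a stream copy into a .webm output avoids re-encoding entirely where WebM playback is acceptable for the target browsers. A minimal sketch in the same subprocess style as the first question above (filenames are placeholders):

import subprocess

# Stream-copy the VP8 video and Opus audio into WebM instead of MP4;
# both codecs are supported by the WebM container, so no transcoding occurs.
command = ['ffmpeg', '-i', 'video.mkv', '-i', 'audio.mka',
           '-c', 'copy', 'output.webm']
subprocess.run(command, check=True)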
EDIT:
Just wanted to provide a benchmark after following LordNeckbeard's advice:
A 4 min 41 s video, transcoded locally on my Mac:
LordNeckbeard's approach: 15 min 55 s (955 seconds)
Current approach: 18 min 49 s (1129 seconds)
18% speed increase
-
How to set the destination folder of a Node.js fluent-ffmpeg screenshot to your AWS S3 bucket using getSignedUrl()?
10 July 2017, by Madhavi Mohoni
I'm writing a program to generate .png thumbnails (with the same name, in the same folder) for a set of .mp4 videos in my Amazon S3 bucket. For this example, I'm going to create a /folder/file.png for a /folder/file.mp4 in the bucket. I've managed to set the source URL using the s3 object and getSignedUrl as follows:
var srcurl = s3.getSignedUrl('getObject', {
Bucket: 'bucket-name',
Key: '/folder/file.mp4'
});
and
new ffmpeg({ source: srcurl })
.screenshots({
count: 1,
filename: '%b.png',
/* '%b' is fluent-ffmpeg's pattern for the input basename without extension */
folder: desturl,
size: MAX_WIDTH + 'x' + MAX_HEIGHT
});
The destination URL has to be the same folder as the source. So I set it as follows:
var desturl = s3.getSignedUrl('putObject', {
Bucket: 'bucket-name',
Key: '/folder/file' + '.png'
});
This combination doesn't work. Is there a way to do this correctly?
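Since ffmpeg writes the screenshot to the local filesystem rather than to an HTTP endpoint, one way to get the thumbnail next to the video is to write it to a temporary folder and then upload it to the derived key. A rough sketch of that flow in Python with boto3 (the bucket name, keys and temporary path are placeholders, and the plain ffmpeg call stands in for the fluent-ffmpeg screenshots() call from the question):

import os
import subprocess
import boto3

s3 = boto3.client('s3')
bucket = 'bucket-name'           # placeholder, as in the question
video_key = 'folder/file.mp4'    # placeholder source object

# Derive the thumbnail key next to the video: folder/file.mp4 -> folder/file.png
thumb_key = os.path.splitext(video_key)[0] + '.png'

# Presigned GET URL for the source video; ffmpeg can read HTTP(S) inputs directly.
src_url = s3.generate_presigned_url('get_object',
                                    Params={'Bucket': bucket, 'Key': video_key})

# Grab a single frame to a local temporary file.
local_thumb = '/tmp/thumbnail.png'
subprocess.run(['ffmpeg', '-y', '-i', src_url, '-frames:v', '1', local_thumb],
               check=True)

# Upload the thumbnail next to the video instead of pointing ffmpeg at a signed URL.
s3.upload_file(local_thumb, bucket, thumb_key)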