
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
Other articles (53)
-
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
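As a rough illustration of this approach (not MediaSPIP's actual code; the helper name below is made up for the example), a page can test native HTML5 support and only fall back to Flash when it is missing:
var probe = document.createElement('video');
// canPlayType returns "", "maybe" or "probably"; an empty string means no native support.
if (probe.canPlayType && probe.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"')) {
  // keep the HTML5 <video> element already present in the page
} else {
  // replace it with a Flash player such as Flowplayer for older browsers
  loadFlashFallback();   // hypothetical helper that embeds the Flash player
}
-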
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct steps.
Upload and retrieval of information about the source video
First, an SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are performed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams, and generation of a thumbnail by extracting a (...)
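Those two extra actions map naturally onto ffmpeg's command-line tools; a hedged illustration (SPIPMotion's actual invocations may differ, and the file name is only an example):
ffprobe -show_format -show_streams source.mp4         # read the technical details of the audio and video streams
ffmpeg -i source.mp4 -ss 3 -vframes 1 thumbnail.jpg   # extract a single frame to use as the thumbnail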
On other sites (7332)
-
Live audio using ffmpeg, javascript and nodejs
8 November 2017, by klaus
I am new to this thing, so please don't hang me for the poor grammar. I am trying to create a proof-of-concept application which I will later extend. It does the following: we have an HTML page which asks for permission to use the microphone. We capture the microphone input and send it via a websocket to a Node.js app.
JS (Client):

var context = new AudioContext();   // Web Audio context used by the capture graph below
var bufferSize = 4096;
var socket = new WebSocket(URL);

// ScriptProcessor taps the microphone samples so they can be pushed over the websocket.
var myPCMProcessingNode = context.createScriptProcessor(bufferSize, 1, 1);
myPCMProcessingNode.onaudioprocess = function(e) {
  var input = e.inputBuffer.getChannelData(0);   // Float32 samples in [-1, 1], single channel
  socket.send(convertFloat32ToInt16(input));
};

// Convert the Float32 samples to 16-bit signed integers before sending.
function convertFloat32ToInt16(buffer) {
  var l = buffer.length;
  var buf = new Int16Array(l);
  while (l--) {
    buf[l] = Math.min(1, buffer[l]) * 0x7FFF;
  }
  return buf.buffer;
}

navigator.mediaDevices.getUserMedia({audio: true, video: false})
  .then(function(stream) {
    var microphone = context.createMediaStreamSource(stream);
    microphone.connect(myPCMProcessingNode);
    myPCMProcessingNode.connect(context.destination);
  })
  .catch(function(e) {});

In the server we take each incoming buffer, run it through ffmpeg, and send whatever comes out of stdout to another device using a Node.js 'http' POST request. The device has a speaker. We are basically trying to create a one-way audio link from the browser to the device.
Node.js (Server):

var WebSocketServer = require('websocket').server;
var http = require('http');
var children = require('child_process');

// HTTP server that the websocket server attaches to (setup assumed; the port is illustrative).
var httpServer = http.createServer(function(request, response) {
  response.writeHead(404);
  response.end();
});
httpServer.listen(8080);
var wsServer = new WebSocketServer({ httpServer: httpServer });

wsServer.on('request', function(request) {
  var connection = request.accept(null, request.origin);
  connection.on('message', function(message) {
    if (message.type === 'utf8') { /*NOP*/ }
    else if (message.type === 'binary') {
      // Feed the raw PCM chunk from the browser into ffmpeg's stdin.
      ffm.stdin.write(message.binaryData);
    }
  });
  connection.on('close', function(reasonCode, description) {});
  connection.on('error', function(error) {});
});

// ffmpeg is told to read signed 16-bit 48 kHz stereo PCM from stdin and write AIFF to stdout.
var ffm = children.spawn(
  './ffmpeg.exe'
  , '-stdin -f s16le -ar 48k -ac 2 -i pipe:0 -acodec pcm_u8 -ar 48000 -f aiff pipe:1'.split(' ')
);
ffm.on('exit', function(code, signal) {});
ffm.stdout.on('data', (data) => {
  // Forward whatever ffmpeg produces to the device over the chunked POST request.
  req.write(data);
});

var options = {
  host: 'xxx.xxx.xxx.xxx',
  port: xxxx,
  path: '/path/to/service/on/device',
  method: 'POST',
  headers: {
    'Content-Type': 'application/octet-stream',
    'Content-Length': 0,
    'Authorization': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
    'Transfer-Encoding': 'chunked',
    'Connection': 'keep-alive'
  }
};
var req = http.request(options, function(res) {});

The device supports only a continuous POST and only a couple of formats (ulaw, aiff, wav).
This solution doesn't seem to work: from the device's speaker we only hear something like white noise.
Also, I think I may have a problem with the buffer I am sending to ffmpeg's stdin. I tried dumping whatever comes out of the websocket to a .wav file and then playing it with VLC; it plays the whole recording very fast, with 10 seconds of recording played back in about 1 second.
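One way to sanity-check such a dump is to tell the player the raw format explicitly, since the captured PCM carries no header; for example, assuming the AudioContext ran at 44.1 kHz and the capture is mono:
ffplay -f s16le -ar 44100 -ac 1 dump.raw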
I am new to audio processing and have searched for about 3 days now for solutions on how to improve this and found nothing.
I would ask the community for two things:
-
Is something wrong with my approach? What more can I do to make this work? I will post more details if required.
-
If what I am doing is reinventing the wheel, then I would like to know what other software or third-party service (like Amazon or whatever) can accomplish the same thing.
Thank you.
-
AWS Lambda making video thumbnails
9 December 2017, by Jesus
I want to make thumbnails from videos uploaded to S3, and I know how to make them with Node.js and ffmpeg.
According to this forum post I can add libraries:
ImageMagick is the only external library that is currently provided by
default, but you can include any additional dependencies in the zip
file you provide when you create a Lambda function. Note that if this
is a native library or executable, you will need to ensure that it
runs on Amazon Linux.
But how can I put a static ffmpeg binary on AWS Lambda?
And how can I call this static binary (ffmpeg) from Node.js on AWS Lambda?
I'm a newbie with Amazon AWS and Linux.
Can anyone help me?
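A minimal sketch of what such a handler could look like, assuming the static Linux ffmpeg build is shipped inside the function's zip under bin/ and the input file has already been fetched from S3 to /tmp (paths and names are illustrative, not a definitive implementation):
// Node.js Lambda handler calling a static ffmpeg binary bundled with the function.
const { execFile } = require('child_process');
const path = require('path');

exports.handler = (event, context, callback) => {
  const ffmpeg = path.join(__dirname, 'bin', 'ffmpeg');  // static build included in the deployment zip
  const input  = '/tmp/input.mp4';                       // assumed: already downloaded from S3
  const output = '/tmp/thumb.jpg';

  // Grab a single frame one second into the video to use as the thumbnail.
  execFile(ffmpeg, ['-i', input, '-ss', '1', '-vframes', '1', output], (err) => {
    if (err) return callback(err);
    callback(null, output);  // uploading the thumbnail back to S3 would go here
  });
};
The binary needs its executable bit set before zipping and, as the quoted forum post notes, it has to be built for Amazon Linux.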
-
MP4Box / FFMPEG concat loses audio after first clip
17 November 2017, by user1615343
So I am certainly no expert when it comes to either of these tools, but I have a web-based project that executes commands on an Amazon Linux server to concatenate two video files that are uploaded.
Both files are converted to MP4s first using FFMPEG, and those play perfectly in a browser after conversion:
ffmpeg -i file1.mpg -c:v libx264 -crf 22 -c:a aac -strict -2 -movflags faststart file2.mp4
Then I attempt to combine these two resulting MP4s into a single MP4. I tried using FFMPEG to do this, but to no avail. Switching to MP4Box got me much closer: the videos are concatenated together, but the audio stops playing at the end of the first clip, and the second clip is silent.
MP4Box -force-cat -keepsys -add file.mp4 -cat file2.mp4 out.mp4
I’ve tried varying versions of the above command with no better results. Any input is greatly appreciated.
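For reference, the usual pure-ffmpeg route for joining two already-encoded MP4s is the concat demuxer; a minimal sketch, assuming both files share the same codecs, resolution and channel layout (which, per the probe output below, is not quite true of these two clips):
echo "file 'file1.mp4'" > list.txt
echo "file 'file2.mp4'" >> list.txt
ffmpeg -f concat -i list.txt -c copy out.mp4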
EDIT: info on the .mp4 files using
ffmpeg -i file1.mp4 -i file2.mp4
ffmpeg -i 1510189259715DogRunsintoGlassDoor_315a03a8e20acfc.mp4 -i 1510189273549NewhouseMoonMoonneverseenstairsbeforefunnydog_285a03a8e6aab25.mp4

ffmpeg version N-61041-g52a2138 Copyright (c) 2000-2014 the FFmpeg developers
  built on Mar 2 2014 05:45:04 with gcc 4.6 (Debian 4.6.3-1)
  configuration: --prefix=/root/ffmpeg-static/64bit --extra-cflags='-I/root/ffmpeg-static/64bit/include -static' --extra-ldflags='-L/root/ffmpeg-static/64bit/lib -static' --extra-libs='-lxml2 -lexpat -lfreetype' --enable-static --disable-shared --disable-ffserver --disable-doc --enable-bzlib --enable-zlib --enable-postproc --enable-runtime-cpudetect --enable-libx264 --enable-gpl --enable-libtheora --enable-libvorbis --enable-libmp3lame --enable-gray --enable-libass --enable-libfreetype --enable-libopenjpeg --enable-libspeex --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-version3 --enable-libvpx
  libavutil      52. 66.100 / 52. 66.100
  libavcodec     55. 52.102 / 55. 52.102
  libavformat    55. 33.100 / 55. 33.100
  libavdevice    55. 10.100 / 55. 10.100
  libavfilter     4.  2.100 /  4.  2.100
  libswscale      2.  5.101 /  2.  5.101
  libswresample   0. 18.100 /  0. 18.100
  libpostproc    52.  3.100 / 52.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '1510189259715DogRunsintoGlassDoor_315a03a8e20acfc.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf55.33.100
  Duration: 00:00:04.92, start: 0.023220, bitrate: 634 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 360x360 [SAR 1:1 DAR 1:1], 501 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 132 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from '1510189273549NewhouseMoonMoonneverseenstairsbeforefunnydog_285a03a8e6aab25.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf55.33.100
  Duration: 00:00:18.79, start: 0.023220, bitrate: 455 kb/s
    Stream #1:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 362x360 [SAR 1:1 DAR 181:180], 320 kb/s, 29.94 fps, 29.94 tbr, 11976 tbn, 59.88 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #1:1(eng): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 129 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
At least one output file must be specified