
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (101)
-
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
-
Accepted formats
28 January 2010, by
The following commands give information about the formats and codecs supported by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
Accepted input video formats
This list is not exhaustive; it highlights the main formats used:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
Initially, we (...)
-
Managing the farm
2 March 2010, by
The farm as a whole is managed by "super admins".
Some settings can be adjusted to meet the needs of the different channels.
To begin with, it uses the "Gestion de mutualisation" plugin
On other sites (3431)
-
Merging multiple videos in a template/layout with Python FFMPEG?
14 January 2021, by J. M. Arnold
I'm currently trying to edit videos with the Python library of FFMPEG. I'm working with multiple file formats, precisely .mp4, .png and text inputs (.txt). The goal is to embed the different video files within a "layout"; for demonstration purposes I tried to design an example picture:

[example layout image]

The output is supposed to be a 1920x1080 .mp4 file with the following elements:

- Element 3 is the video itself (due to it being a mobile phone screen recording, it's about the size displayed there)
- Elements 1 and 2 are the "borders", i.e. static pictures (?)
- Element 4 represents a regularly changing text, input through the Python script (probably read from a .txt file)
- Element 5 portrays a .png, .svg or the like; in general a "picture" in the broad sense

What I'm trying to achieve is to create a sort of template file in which I "just" need to input the different .mp4 and .png files, as well as the text, and in the end I'll receive a .mp4 file, where my Python script functions as the navigator sending the data packages to FFMPEG to process the video itself.

I dug into the FFMPEG library as well as the Python-specific repository and wasn't able to find such an option. There were lots of articles explaining the usage of "channel layouts" (though these don't seem to fit my need).


In case anyone wants to try on the same versions:

- python --version: Python 3.7.3
- pip show ffmpeg: Version: 1.4 (it's the most recent; on an off-topic note: it's not obligatory to use FFMPEG; I'd prefer using this library, though if it doesn't offer the functionality I'm looking for, I'd highly appreciate it if someone suggested something else)
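One way to approximate such a template without a dedicated feature is a single ffmpeg invocation whose -filter_complex graph overlays each element onto a 1920x1080 canvas. Below is a minimal sketch that drives the ffmpeg CLI from Python via subprocess rather than through the pip ffmpeg wrapper; all file names, coordinates and font settings are placeholder assumptions, not values taken from the question:

import subprocess

def render_layout(recording, background, picture, textfile, out):
    # One filter graph that places every element on a 1920x1080 canvas.
    filter_graph = (
        # Elements 1/2: the static border/background image, scaled to the canvas
        "[0:v]scale=1920:1080[bg];"
        # Element 3: the phone screen recording, overlaid at a fixed position
        "[bg][1:v]overlay=x=760:y=90[v1];"
        # Element 5: a picture (.png works directly; .svg must be rasterized first)
        "[v1][2:v]overlay=x=80:y=820[v2];"
        # Element 4: text read from a file; reload=1 re-reads it during encoding
        "[v2]drawtext=textfile=" + textfile
        + ":reload=1:x=80:y=120:fontsize=48:fontcolor=white[out]"
    )
    cmd = [
        "ffmpeg", "-y",
        "-loop", "1", "-i", background,   # input 0: still image, looped
        "-i", recording,                  # input 1: the screen recording
        "-i", picture,                    # input 2: the overlay picture
        "-filter_complex", filter_graph,
        "-map", "[out]", "-map", "1:a?",  # keep the recording's audio, if any
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-shortest",                      # stop when the shortest mapped stream ends
        out,
    ]
    subprocess.run(cmd, check=True)

render_layout("recording.mp4", "background.png", "logo.png",
              "caption.txt", "result.mp4")

Note that the PyPI package named ffmpeg is a thin wrapper and distinct from ffmpeg-python; shelling out to the CLI as above sidesteps both, at the cost of building the filter string by hand.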
-
Converting a binary stream to an mpegts stream
22 December 2018, by John Kim
I'm trying to create a livestream web app using NodeJS. The code I currently have emits a raw binary stream from the webcam on the client using Socket.IO, and the Node server receives this raw data. Using fluent-ffmpeg, I want to encode this binary stream into mpegts and send it to an RTMP server in real time, without creating any intermediary files. Could I somehow convert the binary stream into a webm stream and pipe that stream into an mpegts encoder in one ffmpeg command?
My relevant frontend client code:
navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
    socket.emit('config_rtmpDestination', url);
    socket.emit('start', 'start');
    mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.start(2000);
    mediaRecorder.onstop = function(e) {
        stream.stop();
    }
    mediaRecorder.ondataavailable = function(e) {
        socket.emit("binarystream", e.data);
    }
}).catch(function(err) {
    console.log('The following error occurred: ' + err);
    show_output('Local getUserMedia ERROR:' + err);
});

Relevant NodeJS server code:
socket.on('binarystream', function(m) {
    feedStream(m);
});
socket.on('start', function(m) {
    ...
    var ops = [
        '-vcodec', socket._vcodec, '-i', '-',
        '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
        '-an', '-bufsize', '1000',
        '-f', 'mpegts', socket._rtmpDestination
    ];
    ffmpeg_process = spawn('ffmpeg', ops);
    feedStream = function(data) {
        ffmpeg_process.stdin.write(data);
    }
    ...
}

The above code of course doesn't work; I get these errors from ffmpeg:
Error while decoding stream #0:1: Invalid data found when processing input
[NULL @ 000001b15e67bd80] Invalid sync code 61f192.
[libvpx @ 000001b15e6c5000] Failed to decode frame: Bitstream not supported by this decoder

because I'm trying to convert raw binary data into mpegts.
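One detail worth noting: MediaRecorder does not emit raw frames. Each Blob it delivers is a slice of a single continuous WebM/Matroska byte stream, so ffmpeg can demux the pipe directly once the '-vcodec ... -i -' raw hint is dropped. Below is a minimal sketch of such a pipe, written in Python for brevity (the same arguments work with Node's spawn); the UDP destination is a placeholder assumption, since RTMP ingest normally expects -f flv rather than mpegts:

import subprocess

# Let ffmpeg probe the piped WebM container instead of treating stdin as raw video.
ffmpeg_process = subprocess.Popen(
    [
        "ffmpeg",
        "-i", "pipe:0",                          # demux the WebM chunks from stdin
        "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
        "-an",
        "-f", "mpegts", "udp://127.0.0.1:1234",  # placeholder MPEG-TS destination
    ],
    stdin=subprocess.PIPE,
)

def feed_stream(chunk):
    # Called once per 'binarystream' message. Order matters: the chunks are
    # only meaningful as one contiguous stream, starting with the first one
    # (which carries the WebM header).
    ffmpeg_process.stdin.write(chunk)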
-
WebRTC: unsynced audio after processing with ffmpeg (audio length is less than that of video)
22 November 2013, by QuickSilver
I am recording a video using RecordRTC: WebRTC. After receiving the webm video and wav audio at the server, I'm encoding them to an mp4 file using ffmpeg (executing a shell command via PHP). But after encoding, the audio is out of sync with the video (the audio ends before the video). How can I fix this?
I have noticed that the recorded audio is 1 second shorter than the video.
The JS code is here:
record.onclick = function() {
    record.disabled = true;
    var video_constraints = {
        mandatory: {
            "minWidth": "320",
            "minHeight": "240",
            "minFrameRate": "24",
            "maxWidth": "320",
            "maxHeight": "240",
            "maxFrameRate": "24"
        },
        optional: []
    };
    navigator.getUserMedia({
        audio: true,
        video: video_constraints
    }, function(stream) {
        preview.src = window.URL.createObjectURL(stream);
        preview.play();
        // var legalBufferValues = [256, 512, 1024, 2048, 4096, 8192, 16384];
        // sample-rates in at least the range 22050 to 96000.
        recordAudio = RecordRTC(stream, {
            /* extra important, we need to set a big buffer when capturing audio and video at the same time */
            bufferSize: 16384
            //sampleRate: 45000
        });
        recordVideo = RecordRTC(stream, {
            type: 'video'
        });
        recordVideo.startRecording();
        recordAudio.startRecording();
        stop.disabled = false;
        recording_flag = true;
        $("#divcounter").show();
        $("#second-step-title").text('Record your video');
        initCountdown();
        uploadStatus.video = false;
        uploadStatus.audio = false;
    });
};

The ffmpeg command used is:
ffmpeg -y -i 166890589.wav -i 166890589.webm -vcodec libx264 166890589.mp4
Currently I'm adding an offset of -1 to ffmpeg, but I don't think it's right.
ffmpeg -y -itsoffset -1 -i 166890589.wav -i 166890589.webm -vcodec libx264 166890589.mp4
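A hedged alternative to a fixed offset is to pad the shorter audio with silence and cut the output at the video's length, for example (untested; assumes the missing second falls at the end of the audio):

ffmpeg -y -i 166890589.wav -i 166890589.webm -vcodec libx264 -af apad -shortest 166890589.mp4

If the mismatch is gradual drift rather than a fixed gap, -af aresample=async=1 stretches or squeezes the audio against its timestamps instead.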