
Other articles (29)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, provided your MediaSPIP installation is running version 0.2 or later. If in doubt, contact your MediaSPIP administrator to find out.
-
Other interesting software
13 April 2011
We don't claim to be the only ones doing what we do, and certainly don't claim to be the best at it. What we do, we simply try to do well, and to keep improving.
The following list presents software that is more or less similar to MediaSPIP, or whose aims MediaSPIP more or less shares.
We don't know them and we haven't tried them, but you can take a peek.
Videopress
Website: http://videopress.com/
License: GNU/GPL v2
Source code: (...)
-
No talk of markets, clouds, etc.
10 April 2011
The vocabulary used on this site tries to avoid any reference to the buzzwords flourishing so freely on web 2.0 and in the companies that live off it.
You are therefore invited to banish the terms "Brand", "Cloud", "Market", etc.
Our motivation is above all to create a simple tool, accessible to everyone, that encourages the sharing of creative work on the Internet and lets authors keep as much autonomy as possible.
No "Gold or Premium" contract is therefore planned, no (...)
On other sites (7374)
-
Error in converting audio file format from ogg to wav [on hold]
9 June 2014, by Sumit Bisht
I am trying to convert an ogg file, created with WebRTC (HTML5 getUserMedia content recorded in Firefox) and transferred to and decoded on the server, into a wav file with ffmpeg, but I get this error on the command line when converting:
$ ffmpeg -i 2014-6-5_16-17-54.ogg res1.wav
ffmpeg version 2.0.1 Copyright (c) 2000-2013 the FFmpeg developers
built on May 1 2014 13:12:12 with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-4)
configuration: --enable-gpl --enable-version3 --enable-shared --enable-nonfree --enable-postproc --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid
libavutil 52. 38.100 / 52. 38.100
libavcodec 55. 18.102 / 55. 18.102
libavformat 55. 12.100 / 55. 12.100
libavdevice 55. 3.100 / 55. 3.100
libavfilter 3. 79.101 / 3. 79.101
libswscale 2. 3.100 / 2. 3.100
libswresample 0. 17.102 / 0. 17.102
libpostproc 52. 3.100 / 52. 3.100
Guessed Channel Layout for Input Stream #0.0 : mono
Input #0, ogg, from '2014-6-5_16-17-54.ogg':
Duration: 00:00:01.84, start: 0.000000, bitrate: 18 kb/s
Stream #0:0: Audio: opus, 48000 Hz, mono
Metadata:
ENCODER : Mozilla29.0.1
[graph 0 input from stream 0:0 @ 0x18dca20] Invalid sample format (null)
Error opening filters!

However, I am able to play the file on the server, and using the same command I can convert .ogg files generated elsewhere. What might I be missing?
Edit:
Here is the source code used to write the file.
1) During startup, using the getUserMedia API:
navigator.getUserMedia({
    audio: true,
    video: false
}, function(stream) {
    audioStream = RecordRTC(stream, {
        bufferSize: 16384
    });
    audioStream.startRecording();
});

2) When stopping the recording, extracting the recorded information:
function(audioDataURL) {
    var audioFile = {};
    audioFile = {
        contents: audioDataURL
    };
};

On the server end, the following code creates a file from this data:
dataURL = dataURL.split(',').pop(); // dataURL is the audioDataURL as defined above
fileBuffer = new Buffer(dataURL, 'base64');
fs.writeFileSync(filePath, fileBuffer);
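What the log suggests: the stream is Opus in an Ogg container, but this ffmpeg 2.0.1 build was configured without --enable-libopus, so no Opus decoder is available; ffmpeg can demux the stream but cannot determine a sample format, hence "Invalid sample format (null)" when building the filter graph. The .ogg files that convert fine were presumably Vorbis, which this build does support (--enable-libvorbis). Below is a minimal Node.js sketch of the conversion, assuming ffmpeg has been upgraded or rebuilt with an Opus decoder; the file name is the one from the question:

const { execFile } = require('child_process');

// Same conversion as the question, but forcing the libopus decoder for
// the input. Assumes an ffmpeg build with --enable-libopus; recent
// ffmpeg versions also ship a native Opus decoder, in which case a
// plain "ffmpeg -i in.ogg out.wav" works as well.
execFile('ffmpeg', [
    '-c:a', 'libopus',              // decode the input with libopus
    '-i', '2014-6-5_16-17-54.ogg',  // the WebRTC recording
    'res1.wav'
], (err, stdout, stderr) => {
    if (err) return console.error('conversion failed:\n' + stderr);
    console.log('wrote res1.wav');
});

-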
No audio output using FFmpeg
26 March 2022, by John Mergene Arellano
I am having a problem with the live stream output. I am streaming from a mobile app to a Node.js server and on to RTMP. The video output of the live stream works, but there is no audio output.


On the client side, I am sending the stream using the Socket.IO library. I captured the video and audio using the getUserMedia API.


navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
 window.videoStream = video.srcObject = stream;
 let mediaRecorder = new MediaRecorder(stream, {
 videoBitsPerSecond : 3 * 1024 * 1024
 });
 mediaRecorder.addEventListener('dataavailable', (e) => {
 let data = e.data;
 socket.emit('live', data);
 });
 mediaRecorder.start(1000);
});



Then my server will receive the stream and write it to FFmpeg.


client.on('live', (stream)=>{
 if(ffmpeg)
 ffmpeg.stdin.write(stream);
});



I tried watching the live video in VLC media player. There is a 5-second delay and no audio output.


Please see below for the FFmpeg options I used:


ffmpeg = this.CHILD_PROCESS.spawn("ffmpeg", [
 '-f',
 'lavfi',
 '-i', 'anullsrc',
 '-i','-',
 '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
 '-c:a', 'aac', '-ar', '44100', '-b:a', '64k',
 '-y', //force to overwrite
 '-use_wallclock_as_timestamps', '1', // used for audio sync
 '-async', '1', // used for audio sync
 '-bufsize', '1000',
 '-f',
 'flv',
 `rtmp://127.0.0.1:1935/live/stream` ]);



What is wrong with my setup? I need to fix the command so that the live stream outputs both video and audio.

I tried streaming to YouTube RTMP, but there is still no audio. I expect the output to carry both the video and the audio captured by the getUserMedia API.
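A likely culprit is ffmpeg's default stream selection. With two inputs and no -map options, ffmpeg keeps only one audio stream, the one with the most channels; anullsrc defaults to stereo while a getUserMedia microphone track is typically mono, so the silent generator wins and the real audio is dropped. Note also that -use_wallclock_as_timestamps is an input option and belongs before its -i. Here is a hedged sketch of the spawn call with anullsrc removed and explicit mapping, assuming the piped WebM really does contain an audio track; the codec settings and RTMP URL are taken from the question:

const { spawn } = require('child_process');

const ffmpeg = spawn('ffmpeg', [
    '-use_wallclock_as_timestamps', '1', // input option: must precede '-i'
    '-i', '-',                           // WebM from MediaRecorder, via stdin
    '-map', '0:v', '-map', '0:a',        // keep both tracks of the real input
    '-c:v', 'libx264', '-preset', 'veryfast', '-tune', 'zerolatency',
    '-c:a', 'aac', '-ar', '44100', '-b:a', '64k',
    '-f', 'flv',
    'rtmp://127.0.0.1:1935/live/stream'
]);

If a silent fallback track is genuinely wanted, anullsrc can stay as input 0, but the real audio must then be selected explicitly with -map 1:a.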


-
ffmpeg legitimate decoding errors
20 July 2017, by Gideon Oduro
My issue is as follows: I am sending H.264-encoded video, captured with the help of WebRTC, over a WebSocket. The idea is to perform server-side analysis and object tracking.
navigator.mediaDevices.getUserMedia(constraint).then((stream) => {
    isVideoElement(target, stream)
    mediaRecorder = recorder(stream, {mimeType: 'video/webm; codecs=H264'})
    mediaRecorder.ondataavailable = (blob) => socket.send(blob.data)
    mediaRecorder.start('2000');
})

On the server side, the data is received as a ByteBuffer:
override fun handleBinaryMessage(session: WebSocketSession, msg: BinaryMessage) {
    analysis(msg.payload)
}

I am using the following resources (resource_1, resource_2) to try to convert my ByteBuffer to an OpenCV frame:
fun startPreview(data: ByteBuffer) {
    avcodec_register_all()
    val pack = avcodec.AVPacket()
    pack.data(BytePointer(data))
    avcodec.av_init_packet(pack)
    val videoData = BytePointer(data)
    val codec = avcodec.avcodec_find_decoder(avcodec.AV_CODEC_ID_H264)
    val videoCodecContext = avcodec.avcodec_alloc_context3(codec)
    videoCodecContext.width(1280)
    videoCodecContext.height(720)
    videoCodecContext.pix_fmt(avutil.AV_PIX_FMT_YUV420P)
    videoCodecContext.codec_type(avutil.AVMEDIA_TYPE_VIDEO)
    videoCodecContext.extradata(videoData)
    videoCodecContext.extradata_size(data.capacity())
    videoCodecContext.flags2(videoCodecContext.flags2() or avcodec.CODEC_FLAG2_CHUNKS)
    avcodec.avcodec_open2(videoCodecContext, codec, null as PointerPointer<*>?)
    val decodedFrameLength = avcodec.avcodec_receive_frame(videoCodecContext, avutil.AVFrame())
    println(decodedFrameLength)
}

I then receive a decodedFrameLength of -35, indicating a decoding error, and I can't figure out how to proceed from here.
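Two hedged observations. First, -35 is AVERROR(EAGAIN) on macOS/BSD: avcodec_receive_frame has no frame to return because no packet was ever submitted with avcodec_send_packet, so this is not yet a real decode error. Second, MediaRecorder with mimeType 'video/webm; codecs=H264' emits WebM containers, not raw Annex B H.264, so the received bytes cannot be fed directly as packet data, let alone used as extradata. One pragmatic alternative, rather than hand-rolling the demuxing, is to let an ffmpeg process demux and decode and hand the analysis raw frames. A Node.js sketch of that approach follows; ws, analyse, and the frame dimensions are hypothetical placeholders, not names from the question:

const { spawn } = require('child_process');

const width = 1280, height = 720;      // assumed capture size
const frameSize = width * height * 3;  // bgr24: 3 bytes per pixel

// Let ffmpeg demux the WebM chunks and decode the H.264 track,
// emitting raw BGR frames on stdout for OpenCV-style analysis.
const ffmpeg = spawn('ffmpeg', [
    '-i', '-',                             // WebM chunks on stdin
    '-f', 'rawvideo', '-pix_fmt', 'bgr24', // raw frames out
    '-'
]);

// Feed each WebSocket message (a MediaRecorder blob) to ffmpeg.
ws.on('message', (data) => ffmpeg.stdin.write(data)); // 'ws' is hypothetical

// Reassemble stdout into fixed-size frames before analysing.
let buffered = Buffer.alloc(0);
ffmpeg.stdout.on('data', (chunk) => {
    buffered = Buffer.concat([buffered, chunk]);
    while (buffered.length >= frameSize) {
        analyse(buffered.slice(0, frameSize)); // 'analyse' is hypothetical
        buffered = buffered.slice(frameSize);
    }
});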