
Media (91)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (104)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Use, discuss, criticize
13 April 2011. Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
Sites built with MediaSPIP
2 May 2011. This page presents some of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page.
On other sites (7014)
-
Updated (reproducible) - Gaps when recording using the MediaRecorder API (audio/webm opus)
25 March 2019, by Jack Juiceson
----- UPDATE HAS BEEN ADDED BELOW -----
I have an issue with the MediaRecorder API (https://www.w3.org/TR/mediastream-recording/#mediarecorder-api).
I’m using it to record speech from the web page (Chrome was used in this case) and save it as chunks.
I need to be able to play it both while and after it is recorded, so it’s important to keep those chunks. Here is the code which records the data:
navigator.mediaDevices.getUserMedia({ audio: true, video: false }).then(function (stream) {
  recorder = new MediaRecorder(stream, { mimeType: 'audio/webm; codecs="opus"' })
  recorder.ondataavailable = function (e) {
    // Read blob from `e.data`, base64-encode and send to server;
  }
  recorder.start(1000) // emit a chunk every 1000 ms
})

The issue is that the WebM file which I get when I concatenate all the parts is (rarely) corrupted! I can play it as WebM, but when I try to convert it (with ffmpeg) to something else, it gives me a file with shifted timings.
For example, I’m trying to convert a file with a duration of 00:36:27.78 to wav, but I get a file with a duration of 00:36:26.04, which is 1.74s less. At the beginning of the file the audio is the same, but after about 10 minutes the WebM file plays with a small delay.
After some research, I found out that it also does not play correctly with the browser’s MediaSource API, which I use for playing the chunks. I tried two ways of playing those chunks:
In the case when I just merge all the parts into a single blob, it works fine.
In the case when I add them via the sourceBuffer object, it has some gaps (I can see them by inspecting the buffered property).
697.196 - 697.528 (330ms)
996.198 - 996.754 (550ms)
1597.16 - 1597.531 (370ms)
1896.893 - 1897.183 (290ms)
Those gaps are 1.55s in total, and they are exactly in the places where the desync between the wav and webm files starts. Unfortunately, the file where it is reproducible cannot be shared because it’s the customer’s private data, and I have not been able to reproduce such an issue on different media yet.
What could be the cause of such an issue?
----- UPDATE -----
I was able to reproduce the issue at https://jsfiddle.net/96uj34nf/4/
To see the problem, click on the "Print buffer zones" button and it will display the time ranges. You can see that the buffered ranges are 0 - 136.349, 141.388 - 195.439 and 197.57 - 198.589, leaving two gaps:
- 136.349 - 141.388
- 195.439 - 197.57
So, as you can see, there are 5 and 2 second gaps. I would be happy if someone could shed some light on why this is happening or how to avoid this issue.
Thank you
-
Remuxing mp4 on the fly with FFmpeg API
25 October 2015, by Vadym
My goal is to stream out mpegts. As input, I take an mp4 file stream. That is, the video producer writes an mp4 file into a stream that I try to work with. The video may be anywhere from one minute to ten minutes. Because the producer writes bytes into the stream, the originally written mp4 header is not complete (the first 32 bytes prior to ftyp are 0x00 because it doesn’t yet know the various offsets, which are written post-recording, I think):
This is how the header of a typical mp4 looks:

00 00 00 18 66 74 79 70 69 73 6f 6d 00 00 00 00    ....ftypisom....
69 73 6f 6d 33 67 70 34 00 01 bb 8c 6d 64 61 74    isom3gp4..»Œmdat

This is how the header of an "in progress" mp4 looks:

00 00 00 00 66 74 79 70 69 73 6f 6d 00 00 00 00    ....ftypisom....
69 73 6f 6d 33 67 70 34 00 00 00 18 3f 3f 3f 3f    isom3gp4....????
6d 64 61 74                                        mdat

It is my guess, but I assume that once the producer completes recording, it updates the header by writing all the necessary offsets.
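For reference, the leading four bytes of each mp4 box are its big-endian size, followed by a four-byte type tag; that is why the "in progress" dump starts with 00 00 00 00 where the finished file has 00 00 00 18 (24 bytes, the size of the ftyp box above). A minimal sketch of parsing a box header, assuming buf points at the start of a box:

    // mp4 box layout: [4-byte big-endian size][4-byte ASCII type][payload]
    static uint32_t read_be32(const uint8_t *p)
    {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }

    uint32_t box_size = read_be32(buf);                       // 0x18 = 24 for the ftyp box above
    char box_type[5] = { buf[4], buf[5], buf[6], buf[7], 0 }; // e.g. "ftyp" or "mdat"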
I have run into two issues while trying to make this work:
- I created a custom AVIO context with a read function that does not support seeking. In my driver program, I decided to stream in a properly formatted mp4 file. I am able to detect its input format. When I try to open it, I see that my custom read function gets executed within avformat_open_input until the entire file is read in.
My code sample:

av_register_all();
AVFormatContext* pCtx = avformat_alloc_context();
pCtx->pb = avio_alloc_context(
    pBuffer,     // internal buffer
    iBufSize,    // internal buffer size
    0,           // bWriteable (1=true, 0=false)
    stream,      // user data; will be passed to our callback functions
    read_stream, // read callback function
    NULL,        // write callback function (not used in this example)
    NULL         // seek callback function
);
pCtx->pb->seekable = 0;
pCtx->pb->write_flag = 0;
pCtx->iformat = av_find_input_format("mp4");
pCtx->flags |= AVFMT_FLAG_CUSTOM_IO;
avformat_open_input(&pCtx, "", pCtx->iformat, NULL);

Obviously, this does not work as I need (my expectations were wrong). Once I substitute the file of finite size with a stream of varying length, I cannot have avformat_open_input wait around for the stream to finish before attempting to do further processing.
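For illustration, a minimal sketch of what the read_stream callback wired into avio_alloc_context above could look like; my_stream_t and stream_read_bytes are hypothetical stand-ins for whatever handle and blocking read the producer’s stream actually exposes:

    static int read_stream(void *opaque, uint8_t *buf, int buf_size)
    {
        my_stream_t *stream = (my_stream_t *) opaque;      // the user-data pointer from avio_alloc_context
        int n = stream_read_bytes(stream, buf, buf_size);  // hypothetical blocking read
        if (n == 0)
            return AVERROR_EOF;  // producer closed the stream: signal end of input
        if (n < 0)
            return AVERROR(EIO); // read error
        return n;                // number of bytes actually placed into buf
    }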
As such, I need to find a way to open the input without attempting to read it, and only read when I execute av_read_frame. Is this at all possible with a custom AVIO? That is: prepare/open input -> read initial input data into input buffer -> read frame/packet from input buffer -> write packet to output -> repeat reading input data until the end of stream.
I scoured Google and only saw two alternatives: providing a custom URLProtocol and using AVFMT_NOFILE.
Custom URLProtocol
This sounds like a slightly backwards way of doing what I’m trying to accomplish. I understand that it is best used when there is a file source available, whereas I am trying to read from a byte stream. Also, another reason I think it doesn’t fit my needs is that a custom URLProtocol needs to be compiled into the ffmpeg lib, correct? Or is there a way to register it manually at runtime?
AVFMT_NOFILE
This seems like something that should actually work best for me. The flag itself says that there is no underlying source file and assumes that I will handle all the reading and provisioning of input data. The trouble is that I haven’t seen any online code snippets for it so far, but my assumption about how it would work is spelled out under the second issue below.
I am really hoping to get some suggestions and food for thought from anyone, because I am a newbie to ffmpeg and digital media, and my second issue assumes that I can stream output while ingesting input.
-
As I mentioned above, I have a handle on the mp4 file bytestream as it would be written to the hard disk. The format is mp4 (h.264 and aac). I need to remux it to mpegts prior to streaming it out. This shouldn’t be difficult because mp4 and mpegts are simply containers. From what I have learned so far, an mp4 file looks like the following:
[header info containing format versions]
mdat
[stream data, in my case h.264 and aac streams]
[some trailer separator]
[trailer data]
If that is correct, I should be able to get a handle on the h.264 and aac interleaved data by simply starting to read the stream after the "mdat" identifier, correct?
If that is true and I decide to go with the AVFMT_NOFILE approach of managing input data, I can just ingest stream data (into the AVFormatContext buffer) -> av_read_frame -> process it -> populate the AVFormatContext with more data -> av_read_frame -> and so on until the end of the stream, roughly as in the sketch below.
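To make that concrete, here is a rough sketch of the remux loop, loosely following FFmpeg’s doc/examples/remuxing.c; it assumes a reasonably recent FFmpeg (the codecpar API), that in_ctx is an AVFormatContext opened with the custom read callback from the first issue, and it omits all error checking for brevity:

    AVFormatContext *out_ctx = NULL;
    avformat_alloc_output_context2(&out_ctx, NULL, "mpegts", NULL);

    // Mirror every input stream (h.264 video, aac audio) on the output side.
    for (unsigned i = 0; i < in_ctx->nb_streams; i++) {
        AVStream *out = avformat_new_stream(out_ctx, NULL);
        avcodec_parameters_copy(out->codecpar, in_ctx->streams[i]->codecpar);
        out->codecpar->codec_tag = 0; // let the mpegts muxer pick its own tag
    }

    // The output could just as well be a pipe or another custom AVIO context.
    avio_open(&out_ctx->pb, "out.ts", AVIO_FLAG_WRITE);
    avformat_write_header(out_ctx, NULL);

    AVPacket pkt;
    while (av_read_frame(in_ctx, &pkt) >= 0) { // pulls input via the custom read callback
        AVStream *ist = in_ctx->streams[pkt.stream_index];
        AVStream *ost = out_ctx->streams[pkt.stream_index];
        // Rescale timestamps from the mp4 stream timebase to the mpegts one.
        av_packet_rescale_ts(&pkt, ist->time_base, ost->time_base);
        pkt.pos = -1;
        av_interleaved_write_frame(out_ctx, &pkt);
        av_packet_unref(&pkt);
    }
    av_write_trailer(out_ctx);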
I know this is a mouthful and a dump of my thoughts, but I would appreciate any discussion, pointers, or thoughts!