
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (92)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Keeping control of your media in your hands
13 April 2011. The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible, and development is based on expanding the (...)
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
implementation costs to be shared between several different projects / individuals
rapid deployment of multiple unique sites
creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (8150)
-
Convert ffmpeg yuv420p AVFrame to CMSampleBufferRef (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
21 July 2014, by user3272750. I have a Foscam IP camera and access to its RTSP stream. I used DFURTSPPlayer to view the stream on my iOS device, which works fine. I use a WebRTC provider that lets me inject frames as CMSampleBufferRef in addition to reading directly from any of the on-board cameras. I want to use this to broadcast the IP camera stream over a secure WebRTC session.
The main loop in DFURTSPPlayer checks whether a frame is available, converts it into a UIImage, and assigns it to an image view:
-(void)displayNextFrame:(NSTimer *)timer
{
    NSTimeInterval startTime = [NSDate timeIntervalSinceReferenceDate];
    // Stop playback once no further frame can be decoded.
    if (![video stepFrame]) {
        [timer invalidate];
        [playButton setEnabled:YES];
        [video closeAudio];
        return;
    }
    imageView.image = video.currentImage;
    // Smooth the displayed frame rate with a simple linear interpolation.
    float frameTime = 1.0 / ([NSDate timeIntervalSinceReferenceDate] - startTime);
    if (lastFrameTime < 0) {
        lastFrameTime = frameTime;
    } else {
        lastFrameTime = LERP(frameTime, lastFrameTime, 0.8);
    }
    [label setText:[NSString stringWithFormat:@"%.0f", lastFrameTime]];
}

I'm trying to do something similar, but instead of (or in addition to) setting the UIImage, I also want to inject the frames into my WebRTC service. Below is an example where they use an AVCaptureSession. I believe I could do something similar in the run loop above and inject the frame, provided I can convert the yuv420p AVFrame into a CMSampleBufferRef:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    self.videoFrame.frameBuffer = sampleBuffer;
    // IMPORTANT: injectFrame expects a 420YpCbCr8BiPlanarFullRange buffer;
    // the frame gets timestamped inside the service.
    NSLog(@"videoframe buffer %@", self.videoFrame.frameBuffer);
    [self.service injectFrame:self.videoFrame];
}

Hence my question. Most of the questions on Stack Overflow involve going in the other direction (typically broadcasting on-board camera input via RTSP). I'm a n00b as far as AVFoundation/Core Video is concerned, but I'm prepared to put in the groundwork if someone can suggest a path. Thanks in advance!
Edit: After reading some more on this, it seems the most important step is the conversion from 420p (planar YUV) to 420f (the bi-planar full-range layout Core Video expects).
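
One possible direction, sketched here as an illustration rather than a verified implementation (the function name is mine, the timing fields are placeholders, and error handling is omitted): use libswscale to convert the decoded yuv420p AVFrame to NV12, copy it into a CVPixelBuffer created with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, and wrap that in a CMSampleBufferRef:

#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>
#include <libswscale/swscale.h>
#include <libavutil/frame.h>

// Illustrative sketch: convert a decoded yuv420p AVFrame into an NV12
// ("420f") CVPixelBuffer via libswscale, then wrap it in a CMSampleBuffer.
// Timing is a dummy placeholder; real code should derive it from the stream.
static CMSampleBufferRef SampleBufferFromYUV420PFrame(AVFrame *frame)
{
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, frame->width, frame->height,
                        kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                        NULL, &pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    // NV12 destination: plane 0 is Y, plane 1 is interleaved CbCr.
    uint8_t *dstData[2] = {
        (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0),
        (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)
    };
    int dstLinesize[2] = {
        (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0),
        (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
    };
    // The yuv420p (3 planes) -> NV12 (2 planes) conversion: the "420p to
    // 420f" step. Note: if the decoder outputs limited-range yuv420p, a
    // range conversion may also be needed to truly match FullRange.
    struct SwsContext *sws = sws_getContext(
        frame->width, frame->height, AV_PIX_FMT_YUV420P,
        frame->width, frame->height, AV_PIX_FMT_NV12,
        SWS_BILINEAR, NULL, NULL, NULL);
    sws_scale(sws, (const uint8_t *const *)frame->data, frame->linesize,
              0, frame->height, dstData, dstLinesize);
    sws_freeContext(sws);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    // Wrap the pixel buffer in a CMSampleBuffer with dummy timing.
    CMVideoFormatDescriptionRef format = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault,
                                                 pixelBuffer, &format);
    CMSampleTimingInfo timing = { kCMTimeInvalid, kCMTimeZero, kCMTimeInvalid };
    CMSampleBufferRef sampleBuffer = NULL;
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true,
                                       NULL, NULL, format, &timing,
                                       &sampleBuffer);
    CFRelease(format);
    CVPixelBufferRelease(pixelBuffer);
    return sampleBuffer; // caller is responsible for CFRelease
}

The resulting buffer could then be handed to injectFrame: from the same run loop that currently updates the UIImage.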
-
Seek in fragmented MP4
15 November 2020, by Stefan Falk. For my web client I want the user to be able to play a track right away, without having to download the entire file. For this I am using a fragmented MP4 with the AAC audio codec (MIME type:
audio/mp4; codecs="mp4a.40.2"
).

This is the command used to convert an input file to an fMP4:


ffmpeg -i /tmp/input.any \
 -f mp4 \
 -movflags faststart+separate_moof+empty_moov+default_base_moof \
 -acodec aac -b:a 256000 \
 -frag_duration 500K \
 /tmp/output.mp4
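
As an aside that is not part of the original question: ffmpeg's mov/mp4 muxer also offers a global_sidx flag, which writes a single sidx index box near the start of the file mapping fragment times to byte ranges. Whether it combines cleanly with the exact flag set above should be verified against your ffmpeg build; a possible variant:

ffmpeg -i /tmp/input.any \
 -f mp4 \
 -movflags faststart+separate_moof+empty_moov+default_base_moof+global_sidx \
 -acodec aac -b:a 256000 \
 -frag_duration 500K \
 /tmp/output.mp4

With such an index, a client can resolve a timestamp to a fragment's byte range after fetching only the head of the file, which is relevant to the seeking problem below.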



If I look at the file produced by the first command in MP4Box.js, I see that it is fragmented like this:


ftyp
moov
moof
mdat
moof
mdat
..
moof
mdat
mfra



This looks alright so far, but the problem I am facing now is that it is not apparent to me how to start loading data from a specific timestamp without introducing additional overhead. What I mean by this is that I need the exact byte offset of the first
[moof][mdat]
for a specific timestamp without the entire file being available.

Let's say I have a file that looks like this :


ftyp
moov
moof # 00:00
mdat 
moof # 00:01
mdat
moof # 00:02
mdat
moof # 00:03
mdat
mfra



This file, however, is not available on my server directly; it is loaded from another service. But the client wants to request packets starting at
00:02
.

Is there a way to do this efficiently, without me having to load the entire file from the other service to my server?


My guess would be to load
[ftyp][moov]
(or store at least this part on my own server), but as far as I know, the metadata stored in those boxes won't help me find the byte offset of the first
[moof][mdat]
pair.

Is this even possible, or am I following the wrong approach here?
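
A sketched direction, offered under stated assumptions rather than as a confirmed answer: the mfra box at the end of the file closes with a fixed-size mfro box whose last four bytes store the size of mfra, so two small HTTP Range requests against the other service (one for the final 16 bytes, one for the mfra itself) are enough to fetch the index without pulling the whole file. The tfra (track fragment random access) entries inside it map presentation times to moof byte offsets. A minimal parser in plain C (helper names are mine; layouts per ISO/IEC 14496-12; error handling omitted; a single audio track is assumed):

#include <stdint.h>
#include <string.h>

static uint32_t be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

static uint64_t be64(const uint8_t *p)
{
    return ((uint64_t)be32(p) << 32) | be32(p + 4);
}

// mfra/mfraSize: the complete mfra box. timescale: from the track's mdhd
// box inside moov. seekTime: seconds. Returns the moof file offset of the
// last random-access point at or before seekTime, or 0 if none qualifies.
uint64_t moof_offset_for_time(const uint8_t *mfra, size_t mfraSize,
                              uint32_t timescale, double seekTime)
{
    uint64_t target = (uint64_t)(seekTime * timescale);
    uint64_t best = 0;
    size_t pos = 8;                               // skip the mfra size/type header
    while (pos + 8 <= mfraSize) {
        uint32_t boxSize = be32(mfra + pos);
        if (boxSize < 8) break;                   // malformed box, bail out
        if (memcmp(mfra + pos + 4, "tfra", 4) == 0) {
            const uint8_t *p = mfra + pos + 8;
            uint8_t version = p[0];               // full box: version + 3 flag bytes
            // p + 4 holds track_ID (ignored: single track assumed);
            // p + 8 packs the traf/trun/sample number field sizes.
            uint32_t lengths = be32(p + 8);
            int skipLen = (((lengths >> 4) & 3) + 1)   // traf_number
                        + (((lengths >> 2) & 3) + 1)   // trun_number
                        + ((lengths & 3) + 1);         // sample_number
            uint32_t entryCount = be32(p + 12);
            const uint8_t *e = p + 16;
            for (uint32_t i = 0; i < entryCount; i++) {
                uint64_t time, offset;
                if (version == 1) { time = be64(e); offset = be64(e + 8); e += 16; }
                else              { time = be32(e); offset = be32(e + 4); e += 8; }
                e += skipLen;                     // skip traf/trun/sample numbers
                if (time > target) return best;   // entries are time-ordered
                best = offset;
            }
        }
        pos += boxSize;
    }
    return best;
}

With the returned offset, one more Range request starting at that byte streams [moof][mdat] pairs from 00:02 onwards; the timescale argument comes from the mdhd box inside the moov you already planned to store, and the mfra this relies on is present in your box listing above.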


-
Is there any ffmpeg command which could process on bytes [on hold]
18 July 2017, by IPS. I have built an RTSP service using C# code, and it is used to record videos with ffmpeg commands.
My confusion: I am getting byte data in my response, and I am not sure whether it contains frames and packets.
So is there any way to pass byte data into an ffmpeg command to record video?
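
A hedged pointer rather than a definitive answer: ffmpeg can read its input from standard input (pipe:0), so the bytes can be written into a spawned ffmpeg process instead of a file, provided the -f input format matches what the camera actually sends (the h264 below is an assumption, standing in for a raw H.264 elementary stream):

ffmpeg -f h264 -i pipe:0 -c copy /tmp/recording.mp4

From C#, that means starting ffmpeg with RedirectStandardInput enabled and writing the received bytes to the process's standard input stream. If the bytes are RTP packets rather than an elementary stream, they would need to be depacketized first.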