
Other articles (73)
-
Updating from version 0.1 to 0.2
24 June 2013. An explanation of the notable changes made when moving MediaSPIP from version 0.1 to version 0.2. What's new?
Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Customising by adding your logo, banner or background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present the changes in your MediaSPIP, or news from your projects, using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the form used to create a news item.
News-item creation form. For a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)
On other sites (8615)
-
FFmpeg static keyframe rate
7 November 2011, by 2di. I have a question about ffmpeg usage. Every time I try to convert video files into a different format, the output file gets a static keyframe sequence. What I mean is that keyframes appear 12 frames apart. I know this is controlled by the -g parameter, which can be set to any other number.
ffmpeg -i 1.avi -vcodec mpeg4 -b 2000000 out.avi
I believe there should be some way to make keyframes appear at uneven intervals. These intervals should be calculated by the codec, based on image changes in the video file, so that keyframes are inserted only when they are needed, not uniformly every N frames.
Can somebody please explain to me how this "smart" encoding can be done with ffmpeg?
Thank you. SOLUTION: what I've been looking for has a very simple solution. If you set -g to zero, ffmpeg will choose keyframes based on shot changes and the bitrate.
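For illustration, applying that to the command above would look like the following (a sketch based on the note just above; I have not re-verified the exact behaviour):
ffmpeg -i 1.avi -vcodec mpeg4 -g 0 -b 2000000 out.avi
# -g 0 lets the encoder place keyframes itself instead of every N frames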
-
Webcam stream with FFMpeg on iPhone
6 December 2011, by Saphrosit. I'm trying to send and show a webcam stream from a Linux server to an iPhone app. I don't know if it's the best solution, but I downloaded and installed FFMpeg on the Linux server (following, for those who want to know, this tutorial).
FFMpeg is working fine. After a lot of wandering, I managed to send a stream to the client by launching
ffmpeg -s 320x240 -f video4linux2 -i /dev/video0 -f mpegts -vcodec libx264 udp://192.168.1.34:1234
where 192.168.1.34 is the address of the client. Actually the client is a Mac, but it is supposed to be an iPhone. I know the stream is sent and received correctly (tested in different ways).
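For example, one quick sanity check is to open the stream with ffplay on the client machine (assuming ffplay is installed there; 192.168.1.34 is the client's own address):
ffplay udp://192.168.1.34:1234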
However, I didn't manage to watch the stream directly on the iPhone.
I thought of different (possible) solutions:
-
First solution: store the incoming data in an NSMutableData object. Then, when the stream ends, save it and play it using an MPMoviePlayerController. Here's the code:
[video writeToFile:@"videoStream.m4v" atomically:YES];
NSURL *url = [NSURL fileURLWithPath:@"videoStream.m4v"];
MPMoviePlayerController *videoController = [[MPMoviePlayerController alloc] initWithContentURL:url];
[videoController.view setFrame:CGRectMake(100, 100, 150, 150)];
[self.view addSubview:videoController.view];
[videoController play];
The problem with this solution is that nothing is played (I only see a black square), even though the video is saved correctly (I can play it directly from my disk using VLC). Besides, it's not such a great idea; it's just to make things work.
-
Second solution: use CMSampleBufferRef to store the incoming video. Many more problems come with this solution: first of all, there's no CoreMedia.framework on my system. Besides, I don't fully understand what this class represents and what I should do to make it work: I mean, if I start (somehow) filling this "SampleBuffer" with the bytes I receive over the UDP connection, will it then automatically call the CMSampleBufferMakeDataReadyCallback function I set during creation? If so, when? When a single frame is complete, or when the whole stream has been received?
-
Third solution: use the AVFoundation framework (this isn't actually available on my Mac either). I couldn't work out whether it's possible to start recording from a remote source, or even from an NSMutableData, a char* or something like that. In the AVFoundation Programming Guide I didn't find any reference saying whether this is possible or not.
I don't know which of these solutions is best for my purpose. ANY suggestion would be appreciated.
Besides, there's also another problem: I didn't use any segmenter program to send the video. Now, if I'm not mistaken, a segmenter splits the source video into smaller/shorter videos that are easier to send. If that's right, then maybe it's not strictly necessary to make things work (it could be added later). However, since the server runs Linux, I cannot use Apple's mediastreamsegmenter. Can someone suggest an open-source segmenter to use together with FFMpeg?
UPDATE: I edited my question, adding more information about what I have done so far and what my doubts are.
-
Make video frames from a livestream identifiable across multiple clients
23 September 2016, by mschwaig. I need to distribute a video stream from a live source to several clients, with the additional requirement that each frame be identifiable across all clients.
I have already done research into the topic and arrived at a possible solution, which I share below. My solution seems suboptimal, and this is my first experience working with video streams, so I want to see if somebody knows a better way.
The reason why I need to be able to identify specific frames within the video stream is that the streaming clients need to be able to talk about the time differences between events each of them identifies in their video stream.
A little clarifying example
I want to enable the following interaction:
- Two client applications Dewey and Stevie connect to the streaming server
- Dewey displays the stream and Stevie saves it to disk
- Dewey identifies a specific video frame that is of interest to Stevie, so he wants to tell Stevie about it
- Dewey extracts some identifying information from the video frame and sends it to Stevie
- Stevie uses the identifying information to extract the same frame from the copy of the livestream he is currently saving
Dewey cannot send the frame to Stevie directly, because Malcolm and Reese also want to tell him about specific video frames and Stevie is interested in the time difference between their findings.
Suggested solution
The solution I found is to use ffserver to broadcast an RTP stream and use the timestamps from the RTCP packets to identify frames. These timestamps are normally used to synchronize audio and video, not to provide a shared timeline across several clients, which is why I am skeptical that this is the best way to solve my problem.
It also seems beneficial to have frame numbers, i.e. an increasing counter of frames, instead of arbitrary timestamps that increase by some possibly varying offset: for my application I also have to reference neighbouring frames, and it seems easier to compute time differences from frame numbers than the other way around.
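To make that concrete, here is a rough sketch of the timestamp-to-frame-count conversion I would need (the 90 kHz RTP clock is the common value for video; the 25 fps rate and both timestamps are made-up illustrative numbers, not values from my stream):
# convert an RTP timestamp delta into a number of frames
# assumes a 90 kHz RTP clock (common for video) and a constant 25 fps
clock=90000
fps=25
ts_a=123000   # hypothetical RTP timestamp of the frame Dewey flags
ts_b=339000   # hypothetical RTP timestamp of a later frame
echo $(( (ts_b - ts_a) * fps / clock ))   # prints 60: the frames are 60 apart
With plain frame numbers the same answer would be a single subtraction, which is why a counter feels more natural here.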