
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (45)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible, and development is based on expanding the (...)
-
The plugin: Podcasts.
14 July 2010, by
The problem of podcasting is, once again, a problem that highlights the state of standardization of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, heavily geared toward iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free" and is notably supported by Yahoo and the Miro software.
File types supported in the feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)
-
Images
15 May 2013
On other sites (4714)
-
How do I get FFMPEG to build a video using the same timing as my input?
15 April 2016, by Forest J. Handford
I'm trying to create a video of the screen actions a user takes by piping screenshots to FFMPEG from a C# console application. I'm sending 10 frames per second. The final video has exactly as many frames as I sent (i.e., a 10-second video has 100 frames). The duration of the video, however, does not match: with the code below I get 7 m 47 s of video from 490751 ms of input. I've found that PTS gets me a little closer, but it feels like I'm doing something wrong.
private const int VID_FRAME_FPS = 10;
private const double PTS = 2.4444;

/// <summary>
/// Generates the videos by gathering frames and processing via FFMPEG.
/// Deletes the generated frame images after successfully compiling the video.
/// </summary>
public static void RecordScreen(string pathToOutput)
{
    Logger.log.Info("Launching FFMPEG ....");
    String arg = "-f image2pipe -i pipe:.bmp -filter:v \"setpts = " + PTS + " * PTS\" -r " + VID_FRAME_FPS + " -pix_fmt yuv420p -qscale:v 5 -vcodec libvpx -bufsize 30000k -y \"" + pathToOutput + "\\VidOut.webm\"";
    //String arg = "-f image2pipe -i pipe:.bmp -filter:v \"setpts = " + PTS + " * PTS\" -r " + VID_FRAME_FPS + " -pix_fmt yuv420p -qscale:v 5 -vcodec libx264 -bufsize 30000k -y \"" + pathToOutput + "\\VidOut.mp4\"";

    Process launchingFFMPEG = new Process
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = arg,
            UseShellExecute = false,
            CreateNoWindow = true,
            RedirectStandardInput = true
        }
    };
    launchingFFMPEG.Start();

    System.Drawing.Image img;
    Stopwatch stopWatch = Stopwatch.StartNew(); // creates and starts the Stopwatch instance
    int sleep;
    Stopwatch vidTime = Stopwatch.StartNew();
    do
    {
        // Grab a screenshot and pipe it to FFMPEG as a BMP.
        img = Capture.GetScreen();
        img.Save(launchingFFMPEG.StandardInput.BaseStream, System.Drawing.Imaging.ImageFormat.Bmp);
        img.Dispose();

        // 10 * VID_FRAME_FPS = 100 ms, the frame interval at 10 fps.
        sleep = 10 * VID_FRAME_FPS - (int)stopWatch.ElapsedMilliseconds;
        if (sleep > 0)
        {
            Logger.log.Info("Captured frame, sleeping " + sleep + " milliseconds.");
            Thread.Sleep(sleep);
        }
        stopWatch.Restart();
    } while (workerThread.IsAlive); // workerThread is defined elsewhere in the asker's application

    Logger.log.Debug("Video Time: " + vidTime.ElapsedMilliseconds);
    launchingFFMPEG.StandardInput.Flush();
    launchingFFMPEG.StandardInput.Close();
    launchingFFMPEG.Close();
}
Is there a way to do this without PTS? If I need PTS, what is the correct value? It seems that a PTS of 2.565656 is close to correct.
All the related documentation points to just using -r (the framerate option), but that doesn't work, at least the way I'm using it.
Note: I'm only using H.264 for debugging with ffprobe; I plan to switch back to WebM when this is resolved. I'm trying to avoid the H.264 and MP4 patents.
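One detail that may explain the drift: FFmpeg's image2pipe demuxer assumes 25 fps for piped images unless the input rate is declared, and -r placed after the input acts as an output option that duplicates or drops frames rather than retiming the ones already delivered. That would also explain why a setpts factor near 25/10 = 2.5 (close to the 2.4444 and 2.565656 values found above) gets close to correct. A minimal sketch, assuming the frames really are produced at 10 fps (the output path is shortened here for readability):
ffmpeg -framerate 10 -f image2pipe -i pipe:.bmp -pix_fmt yuv420p -qscale:v 5 -vcodec libvpx -bufsize 30000k -y VidOut.webm
Here -framerate is an input option and must appear before -i; with the input rate declared correctly, the setpts filter should no longer be needed.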
-
imagemagick gradient mask file creation
6 April 2016, by lang2
I'm playing with this creative script here: http://www.fmwconcepts.com/imagemagick/transitions/. The plan is to mimic what the script does with ffmpeg and generate video with transition effects between pictures. My current understanding is this:
- I have two pictures, A and B.
- I need a couple of in-between pictures (say 15) that are partially A and partially B.
- To do that I use the composite -compose src-over A.jpg B.jpg mask-n.jpg out.jpg command.
- During the process, the mask-n.jpg is generated automatically, gradually changing from all black to all white.
- Depending on the mathematical equations, the transition effect looks different.
In one of the examples, Fred, the author, gave this:
convert -size 128x128 gradient: maskfile.jpg
This generates a gradient image, shading from white at the top to black at the bottom.
This is partially black and partially white. For the transition to work, I'll need an all-white one, an all-black one, and a couple of others in between. What's the magical command to do that?
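Uniform masks can be produced directly with an xc: canvas, and a hard-edged split can be moved by thresholding the gradient. A minimal sketch (the file names and the 33% steps are arbitrary choices for illustration; Fred's script may compute its masks differently):
convert -size 128x128 xc:black mask-0.jpg
convert -size 128x128 "xc:gray(33%)" mask-1.jpg
convert -size 128x128 "xc:gray(66%)" mask-2.jpg
convert -size 128x128 xc:white mask-3.jpg
The uniform gray masks blend A and B everywhere at once (a cross-fade). Thresholding the gradient at increasing percentages instead moves a black/white boundary across the frame (a wipe):
convert -size 128x128 gradient: -threshold 33% mask-wipe-1.jpg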
-
Creating a video from the data of a server-side script
8 March 2016, by Peter Leupold
My plan is to display the data that a server-side script generates in a video displayable on my web page. So far my approach is the following:
- A three-dimensional integer array in a C program is used to accumulate the image data. The dimensions are width, height, and color (R, G and B).
- The array is written to a PPM file.
- The next picture is accumulated and written, and so on.
- With ffmpeg, a script merges the PPM files into an MP4 video (a sample invocation follows the list).
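For reference, the merge step might look like this; the frame-numbering pattern and the 25 fps rate are assumptions for illustration:
ffmpeg -framerate 25 -i frame-%04d.ppm -pix_fmt yuv420p -y out.mp4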
Basically this works, but faster would of course be nicer. I would appreciate proposals for fundamentally different approaches, as well as help on the following details:
- Is there a file format as simple as PPM that uses hex codes for colors instead of triplets? This would reduce the size of my array as well as the number of write operations.
- Do I lose much time if I print every single value to the file with an fprintf call instead of accumulating lines into a string? Or do compilers optimize this kind of sequence of writes?
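On both points, binary PPM may help: the P6 variant stores one byte per channel with no ASCII formatting, so there is nothing to format and a whole frame can go out in a single fwrite. (Note that stdio already buffers fprintf output; the cost of per-value fprintf is mostly the formatting, not the writing itself.) A minimal sketch, assuming the frame has been packed into an 8-bit-per-channel RGB buffer; the function name and buffer layout are illustrative, not from the original question:
#include <stdio.h>

/* Write one frame as binary PPM (P6): ASCII header,
 * then raw pixel bytes in a single fwrite call. */
static int write_ppm_p6(const char *path, const unsigned char *rgb,
                        int width, int height)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    fprintf(f, "P6\n%d %d\n255\n", width, height); /* magic, size, max channel value */
    fwrite(rgb, 3, (size_t)width * height, f);     /* 3 bytes (R, G, B) per pixel */
    return fclose(f);
}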