
Other articles (96)
-
The videos
21 April 2011
Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 <video> tag.
One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one), and each browser natively supports only certain video formats.
Its main advantage is native video support in the browser, which removes the need for Flash and (...) -
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users. -
Media-specific libraries and software
10 December 2010
For correct and optimal operation, several things need to be taken into account.
It is important, after installing apache2, mysql and php5, to install other required software whose installation is described in the related links: a set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio so as to support as many file types as possible (see this tutorial), and FFmpeg with the maximum number of decoders and (...)
On other sites (6233)
-
Change the frame order/sequence of a video (avconv/ffmpeg)
2 November 2014, by Simon Streicher
I want to use Python and a method similar to the one explained at http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg:
import subprocess as sp
command = ['ffmpeg',
           '-y',                    # (optional) overwrite output file if it exists
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-s', '420x360',         # size of one frame
           '-pix_fmt', 'rgb24',
           '-r', '24',              # frames per second
           '-i', '-',               # the input comes from a pipe
           '-an',                   # tells ffmpeg not to expect any audio
           '-vcodec', 'mpeg4',
           'my_output_videofile.mp4']
pipe = sp.Popen(command, stdin=sp.PIPE, stderr=sp.PIPE)
pipe.stdin.write(image_array.tostring())

to write an image array as frames to ffmpeg. In my application I will write the frames to the pipe as they are completed (not all at once as above).
I want to apply time-distortion to the output video from a "frame-map":
Frame order in:  [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Frame order out: [1, 1, 2, 3, 3.5, 4, 4.5, 6, 7, 8, 9.5, 12]
Is there any way I can pipe in the frame sequence?
I know I can just program my script to send the frames in the correct order, but I was rather hoping to discard the frames as I am done with them (these are long HD videos) and let avconv/ffmpeg handle the ordering and inter-frame averaging (for example frame 3.5).
My other option is to read the input frames at the output frame rate (by playing catch-up) and keep the last 300 frames or so. Beforehand I can make sure my frame map never reaches more than 300 frames back, and I can even make this more robust by tracking any frames referenced further back and storing them as exceptions.
So could ffmpeg/avconv handle all this drama on its own, or do I have to code it up myself?
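If ffmpeg/avconv cannot consume such a frame map directly, the remapping can be done on the Python side before piping, keeping only the frames the remaining map can still reference and blending neighbours for fractional indices. This is a sketch under that assumption; remap_frames is an illustrative name, and the frames can be NumPy arrays or, as in the usage below, plain numbers:

```python
import math

def remap_frames(frame_iter, frame_map):
    """Yield frames in the order given by frame_map (1-based indices).
    A fractional index such as 3.5 yields a linear blend of frames
    3 and 4; frames are dropped once the remaining map can no longer
    reference them, so memory stays bounded."""
    buf = {}        # frame index -> frame (array, or a number here)
    last_read = 0
    for pos, t in enumerate(frame_map):
        lo, hi = math.floor(t), math.ceil(t)
        while last_read < hi:            # pull input frames lazily
            last_read += 1
            buf[last_read] = next(frame_iter)
        oldest_needed = math.floor(min(frame_map[pos:]))
        for k in [k for k in buf if k < oldest_needed]:
            del buf[k]                   # done with these frames
        if lo == hi:
            yield buf[lo]
        else:
            frac = t - lo
            yield (1 - frac) * buf[lo] + frac * buf[hi]
```

Each yielded frame would then go to pipe.stdin.write(...) in the pipe setup quoted above, so only a sliding window of source frames is ever held in memory.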
-
How to provide clipping information to ffmpeg to help video encoding?
24 September 2014, by kriss
I have written software that creates a movie using the ffmpeg video encoding API. Basically I call avcodec_encode_video() for each image of the movie, as explained here.
But in my use case I actually have a lot of information about each image that could greatly help the encoder (and hopefully make encoding much faster). The video I am creating is a sequence of consecutive computer screen captures, and for instance I know the list of clipping rectangles of the parts of the screen that actually changed between two frames. In many cases this reduces to tiny parts of the screen, such as a mouse movement or a clock update.
Is there any way, using the ffmpeg C API, to provide this information to the encoder?
If not, is there another free encoder providing an API that could use that kind of information?
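As an aside, the clipping information the question describes is cheap to compute when it is not already known. A minimal sketch (plain Python on frames given as nested lists of pixel values; changed_bbox is an illustrative name, not part of any encoder API) of finding the bounding box of what changed between two captures:

```python
def changed_bbox(prev, curr):
    """Bounding box (x0, y0, x1, y1), inclusive, of pixels that
    differ between two equally sized frames given as nested lists
    of pixel values, or None when nothing changed."""
    xs, ys = [], []
    for y, (prow, crow) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(prow, crow)):
            if p != c:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None          # frames are identical
    return min(xs), min(ys), max(xs), max(ys)
```

For screen captures where only the mouse or a clock moved, the returned box covers a tiny fraction of the frame, which is exactly the hint the question hopes to hand to the encoder.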
-
How to pass custom values into this Android ffmpeg method
28 August 2014, by kc ochibili
I am trying to call this method, but I don't know how to pass my own values into it, like the output length or the frame rate. The code uses flags like "-y" and "-i", which are listed without really explaining what the letters mean.
Here is the method, from https://github.com/guardianproject/android-ffmpeg-java/blob/master/src/org/ffmpeg/android/FfmpegController.java#L488
/*
* ffmpeg -y -loop 0 -f image2 -r 0.5 -i image-%03d.jpg -s:v 1280x720 -b:v 1M \
-i soundtrack.mp3 -t 01:05:00 -map 0:0 -map 1:0 out.avi
-loop_input – loops the images. Disable this if you want to stop the encoding when all images are used or the soundtrack is finished.
-r 0.5 – sets the framerate to 0.5, which means that each image will be shown for 2 seconds. Just take the inverse, for example if you want each image to last for 3 seconds, set it to 0.33.
-i image-%03d.jpg – use these input files. %03d means that there will be three digit numbers for the images.
-s 1280x720 – sets the output frame size.
-b 1M – sets the bitrate. You want 500MB for one hour, which equals to 4000MBit in 3600 seconds, thus a bitrate of approximately 1MBit/s should be sufficient.
-i soundtrack.mp3 – use this soundtrack file. Can be any format.
-t 01:05:00 – set the output length in hh:mm:ss format.
out.avi – create this output file. Change it as you like, for example using another container like MP4.
*/
public Clip createSlideshowFromImagesAndAudio (ArrayList<Clip> images, Clip audio, Clip out, int durationPerSlide, ShellCallback sc) throws Exception
{
final String imageBasePath = new File(mFileTemp,"image-").getCanonicalPath();
final String imageBaseVariablePath = imageBasePath + "%03d.jpg";
ArrayList<String> cmd = new ArrayList<String>();
String newImagePath = null;
int imageCounter = 0;
Clip imageCover = images.get(0); //add the first image twice
cmd = new ArrayList<String>();
cmd.add(mFfmpegBin);
cmd.add("-y");
cmd.add("-i");
cmd.add(new File(imageCover.path).getCanonicalPath());
if (out.width != -1 && out.height != -1)
{
cmd.add("-s");
cmd.add(out.width + "x" + out.height);
}
newImagePath = imageBasePath + String.format(Locale.US, "%03d", imageCounter++) + ".jpg";
cmd.add(newImagePath);
execFFMPEG(cmd, sc);
for (Clip image : images)
{
cmd = new ArrayList<String>();
cmd.add(mFfmpegBin);
cmd.add("-y");
cmd.add("-i");
cmd.add(new File(image.path).getCanonicalPath());
if (out.width != -1 && out.height != -1)
{
cmd.add("-s");
cmd.add(out.width + "x" + out.height);
}
newImagePath = imageBasePath + String.format(Locale.US, "%03d", imageCounter++) + ".jpg";
cmd.add(newImagePath);
execFFMPEG(cmd, sc);
}
//then combine them
cmd = new ArrayList<String>();
cmd.add(mFfmpegBin);
cmd.add("-y");
cmd.add("-loop");
cmd.add("0");
cmd.add("-f");
cmd.add("image2");
cmd.add("-r");
cmd.add("1/" + durationPerSlide);
cmd.add("-i");
cmd.add(imageBaseVariablePath);
cmd.add("-strict");
cmd.add("-2");//experimental
String fileTempMpg = new File(mFileTemp,"tmp.mpg").getCanonicalPath();
cmd.add(fileTempMpg);
execFFMPEG(cmd, sc);
//now combine and encode
cmd = new ArrayList<String>();
cmd.add(mFfmpegBin);
cmd.add("-y");
cmd.add("-i");
cmd.add(fileTempMpg);
if (audio != null && audio.path != null)
{
cmd.add("-i");
cmd.add(new File(audio.path).getCanonicalPath());
cmd.add("-map");
cmd.add("0:0");
cmd.add("-map");
cmd.add("1:0");
cmd.add(Argument.AUDIOCODEC);
cmd.add("aac");
cmd.add(Argument.BITRATE_AUDIO);
cmd.add("128k");
}
cmd.add("-strict");
cmd.add("-2");//experimental
cmd.add(Argument.VIDEOCODEC);
if (out.videoCodec != null)
cmd.add(out.videoCodec);
else
cmd.add("mpeg4");
if (out.videoBitrate != -1)
{
cmd.add(Argument.BITRATE_VIDEO);
cmd.add(out.videoBitrate + "k");
}
cmd.add(new File(out.path).getCanonicalPath());
execFFMPEG(cmd, sc);
return out;
}
So, say I want an output video that has:
framerate --> one image every 2 seconds
output frame size --> 480x480
output length --> 02:08:00
output file type --> .mp4
How can I call this method with these values, and how do they relate to the flags used above?
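For what it's worth, the comment block above already maps those values to flags: showing each image for 2 seconds means -r 1/2, the frame size goes to -s 480x480, the length to -t 02:08:00, and an .mp4 output path selects the MP4 container. A sketch in Python, mirroring the pipe example earlier on this page (slideshow_cmd and the file names are illustrative; note the Java method as written exposes durationPerSlide, out.width/out.height and out.path, but no parameter for -t):

```python
def slideshow_cmd(duration_per_slide, width, height, length, out_path):
    """Build the ffmpeg argument list the method's comment describes."""
    return ['ffmpeg',
            '-y',                               # overwrite output if it exists
            '-loop', '0',
            '-f', 'image2',
            '-r', '1/%d' % duration_per_slide,  # 2 s per image -> rate 1/2
            '-i', 'image-%03d.jpg',             # three-digit numbered inputs
            '-s', '%dx%d' % (width, height),    # output frame size
            '-t', length,                       # output length, hh:mm:ss
            out_path]

cmd = slideshow_cmd(2, 480, 480, '02:08:00', 'out.mp4')
```

In terms of the Java method, that corresponds to durationPerSlide = 2, out.width = out.height = 480 and out.path = "out.mp4"; to honour the 02:08:00 length you would have to add "-t" handling yourself, since the method never emits it.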