
Media (91)
-
Collections - Quick creation form
19 February 2013
Updated: February 2013
Language: French
Type: Image
-
Les Miserables
4 June 2012
Updated: February 2013
Language: English
Type: Text
-
Not displaying certain information: home page
23 November 2011
Updated: November 2011
Language: French
Type: Image
-
The Great Big Beautiful Tomorrow
28 October 2011
Updated: October 2011
Language: English
Type: Text
-
Richard Stallman et la révolution du logiciel libre - Une biographie autorisée (epub version)
28 October 2011
Updated: October 2011
Language: English
Type: Text
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (104)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
The Podcasts plugin
14 July 2010
The podcasting problem is once again one that reveals the standardization of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly tied to the use of iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "open" and is notably supported by Yahoo and the Miro software.
File types supported in the feeds
Apple’s format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)
On other sites (7014)
-
Slowing down audio using FFMPEG
24 January 2020, by Maxim_A
For example, I have a source file with a duration of 6.40 seconds. I divide this duration into 10 sections, and each section is then slowed down by a certain factor. This works great:
ffmpeg.exe -i preresult.mp4 -filter_complex
"[0:v]trim=0:0.5,setpts=PTS-STARTPTS[vv0];
[0:v]trim=0.5:1,setpts=PTS-STARTPTS[vv1];
[0:v]trim=1:1.5,setpts=PTS-STARTPTS[vv2];
[0:v]trim=1.5:2,setpts=PTS-STARTPTS[vv3];
[0:v]trim=2:2.5,setpts=PTS-STARTPTS[vv4];
[0:v]trim=2.5:3,setpts=PTS-STARTPTS[vv5];
[0:v]trim=3:3.5,setpts=PTS-STARTPTS[vv6];
[0:v]trim=3.5:4,setpts=PTS-STARTPTS[vv7];
[0:v]trim=4:4.5,setpts=PTS-STARTPTS[vv8];
[0:v]trim=4.5:6.40,setpts=PTS-STARTPTS[vv9];
[vv0]setpts=PTS*2[slowv0];
[vv1]setpts=PTS*4[slowv1];
[vv2]setpts=PTS*5[slowv2];
[vv3]setpts=PTS*2[slowv3];
[vv4]setpts=PTS*3[slowv4];
[vv5]setpts=PTS*6[slowv5];
[vv6]setpts=PTS*3[slowv6];
[vv7]setpts=PTS*5[slowv7];
[vv8]setpts=PTS*2[slowv8];
[vv9]setpts=PTS*6[slowv9];
[slowv0][slowv1][slowv2][slowv3][slowv4][slowv5][slowv6][slowv7][slowv8][slowv9]concat=n=10:v=1:a=0[v1]"
-r 30 -map "[v1]" -y result.mp4
Then I needed to slow down the audio stream along with the video. In the documentation I found the atempo filter. The documentation says that the allowed values for this filter range from 0.5 to 100. To slow down to half speed, you use the value 0.5. I also learned that if you need to slow the audio down by 4 times, you just need to chain two filters:
[aa0]atempo=0.5[aslowv0] //Slowdown x2
[aa0]atempo=0.5,atempo=0.5[aslowv0] //Slowdown x4
Question 1:
How can I slow down audio by an odd factor, for example 3, 5, or 7 times? There is no explanation of this point in the documentation.
Question 2:
Do I understand correctly that if you slow down the audio stream and the video stream separately, they will end up with the same duration?
Thank you all in advance!
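Since chained atempo values multiply, the same chaining idea extends to factors that are not powers of two. A minimal sketch, not from the original post (input.mp4, slow3x.mp4 and the 3x factor are placeholder choices): an overall audio tempo of 1/3 is below the 0.5 minimum, so it can be split as 0.5 * 0.66667 ≈ 1/3, while the video is slowed with setpts=3*PTS.
# Sketch only: placeholder file names, 3x chosen as an example factor
ffmpeg -i input.mp4 -filter_complex "[0:v]setpts=3*PTS[v];[0:a]atempo=0.5,atempo=0.66667[a]" -map "[v]" -map "[a]" -y slow3x.mp4
Because the audio tempo product matches the video PTS factor, both streams should come out with approximately the same duration, up to frame and sample rounding.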
-
FFMPEG - frame PTS and DTS increasing faster than it should
20 July 2022, by hi im Bacon
I am pulling footage from an RTSP camera, standardising it, and then segmenting the footage for processing. I am standardising by reducing the resolution and setting the frame rate to 12 fps.


I am encoding the wall time of each frame into the PTS values, since the camera is a live source and I'd like to know exactly when each frame occurred (I'm not fussed about it being perfectly accurate; if it is all out by a second or two because of latency, that is fine by me).


FFmpeg is run from a Python subprocess using the following command:


command = [
    "ffmpeg", "-y",
    "-loglevel", "error",
    "-rtsp_transport", "tcp",
    "-i", URL,
    "-pix_fmt", "yuvj422p",
    "-c:v", "libx264",  # Change the video codec to the codec Kinesis requires
    "-an",  # Remove any audio channels
    "-vf", "scale='min(1280,iw)':'min(720,ih)':force_original_aspect_ratio=decrease",
    "-r", "12",
    "-vsync", "cfr",
    "-x264opts", "keyint=12:min-keyint=12",
    "-f", "segment",  # Set the output format as chunked segments
    "-segment_format", segment_format,  # Set each segment's format, e.g. matroska, mp4
    "-segment_time", str(segment_length),  # Set the length of the segment in seconds
    "-initial_offset", str(initial_pts_offset),
    "-strftime", "1",  # Use strftime notation when naming the video segments
    "{}/%Y-%m-%dT%H%M%S.{}".format(directory, extension),  # Define the name and location of the segments
]



The problem I am having is that the timestamps of the frames increase at a faster-than-real-time rate. The initial offset is set to the time FFmpeg is started, so the timestamps of received frames should always be less than the current time. I am using a segment length of 30 seconds, and after only 5 minutes, finished segments have a start timestamp greater than the present wall time.


The rate of increase looks to be around 3-4 times faster than it should be.


Why is this the case? How can I avoid it? Is my understanding of -r right?

I believed that -r drops extra frames and evens out the frame times, creating new frames where needed, but without actually changing the perceived speed of the footage. The final frame time should not be more than the segment length away from the first frame time.

I have tried using a filter that sets the PTS according to the consumer's wall clock, setpts='time(0)/TB', but this has led to quite choppy footage, as frames can be received/processed at different rates depending on the connection.

The quality of the segments is great and all the data is there... just getting the times right seems impossible.
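One input-side variant worth noting (an assumption on my part, not something from the original post): libavformat has a generic demuxer option, -use_wallclock_as_timestamps 1, which, as far as I know, stamps incoming packets with the host's system clock rather than rewriting PTS in a filter. A minimal sketch with placeholder values (camera URL, 30-second matroska segments, output directory):
# Hedged sketch: CAMERA_URL, the segment length and the output pattern are placeholders
ffmpeg -y -use_wallclock_as_timestamps 1 -rtsp_transport tcp -i "$CAMERA_URL" -an -c:v libx264 -r 12 -vsync cfr -f segment -segment_format matroska -segment_time 30 -strftime 1 "segments/%Y-%m-%dT%H%M%S.mkv"
This only changes where the timestamps come from; whether it removes the drift described above would need testing against the actual camera.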


-
Scene detection and concat makes my video longer (FFMPEG)
12 April 2019, by araujo
I'm encoding videos by scenes. At the moment I have two solutions for doing this. The first one is using a Python application which gives me a list of frames that represent scene changes, like this:
285
378
553
1145
...
The first scene runs from frame 1 to 285, the second from 285 to 378, and so on. So I made a bash script which encodes all these scenes. Basically, it takes the current and previous frame numbers, converts them to times, and finally runs the ffmpeg command:
begin=$(awk 'BEGIN{ print "'$previous'"/"'24'" }')
end=$(awk 'BEGIN{ print "'$current'"/"'24'" }')
time=$(awk 'BEGIN{ print "'$end'"-"'$begin'" }')
ffmpeg -i $video -r 24 -c:v libx265 -f mp4 -c:a aac -strict experimental -b:v 1.5M -ss $begin -t $time "output$count.mp4" -nostdin
This works perfectly. The second method uses ffmpeg itself: I run a command that gives me a list of times, like this (a sketch of one such command appears after the second bash script below):
15.75
23.0417
56.0833
71.2917
...
Again, I made a bash script that encodes all these times. In this case I don't have to convert frames to times, because what I get are already times:
time=$(awk 'BEGIN{ print "'$current'"-"'$previous'" }')
ffmpeg -i $video -r 24 -c:v libx265 -f mp4 -c:a aac -strict experimental -b:v 1.5M -ss $previous -t $time "output$count.mp4" -nostdin
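The original post does not show the detection command that produced the times above. One common way to obtain such scene-change times (an assumption on my part, not the poster's actual command) is the select filter's scene score combined with showinfo, whose log lines contain the pts_time of each detected change:
# Not the poster's actual command; $video and the 0.4 threshold are placeholders
ffmpeg -i "$video" -vf "select='gt(scene,0.4)',showinfo" -f null - 2>&1 | grep -o 'pts_time:[0-9.]*'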
After all this is explained, here comes the problem: once all the scenes are encoded, I need to concatenate them, and for that I create a list with the video names and then run the ffmpeg concat command.
list.txt
file 'output1.mp4'
file 'output2.mp4'
file 'output3.mp4'
file 'output4.mp4'
command:
ffmpeg -f concat -i list.txt -c copy big_buck_bunny.mp4
The problem is that the concatenated video is longer than the original by 2.11 seconds. The original lasts 596.45 seconds and the encoded one lasts 598.56. I added up every segment's duration and got 598.56, so I think the problem is in the encoding process. Both videos have the same number of frames. My goal is to get metrics about the encoding process, but when I run VQMT to get the PSNR and SSIM I get weird results, and I think it is because of this problem.
By the way, I’m using the big_buck_bunny video.
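One way to narrow down where the extra 2.11 seconds come from (a sketch, not part of the original post; it assumes the output*.mp4 naming used above) is to compare each segment's reported duration with the -t value that was requested for it:
# Print each segment's container duration
for f in output*.mp4; do
  printf '%s ' "$f"
  ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$f"
done
If the segment durations add up to more than the per-scene -t values, the extra time is being introduced at the cuts; if they add up exactly, the difference comes from the concat step instead.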