
Other articles (42)
-
Other interesting software
12 April 2011 — We don't claim to be the only ones doing what we do... and we certainly don't claim to be the best either... We simply try to do what we do well, and to keep doing it better.
The following list covers software that more or less aims to do what MediaSPIP does, or that MediaSPIP more or less tries to emulate; either way...
We don't know them and haven't tried them, but you may want to take a look.
Videopress
Website: (...)
-
Publishing on MédiaSpip
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
-
The SPIPmotion queue
28 November 2010 — A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)
On other sites (4698)
-
How to use the actual frame numbers in filenames using ffmpeg when extracting frames? [closed]
2 June 2024, by Joan Venge — Basically I am using ffmpeg to extract every Nth frame from a video, but the filenames are numbered sequentially from 1 to X. I want to use the actual frame numbers, so if it's every 30th frame the filenames should be 0, 30, 60, etc. Is this possible?


I am doing this in Python using this function:


import os
import subprocess


def extract_and_compress_frames(directory, frame_interval=1, crop_width=192, crop_height=108, offset_x=0, offset_y=0):
    for filename in os.listdir(directory):
        if filename.endswith(".trec"):
            # Construct full file path
            trec_path = os.path.join(directory, filename)
            mp4_path = os.path.join(directory, filename.replace(".trec", ".mp4"))
            frames_dir = os.path.join(directory, filename.replace(".trec", ""))

            # Rename .trec to .mp4
            os.rename(trec_path, mp4_path)

            # Create directory for frames
            os.makedirs(frames_dir, exist_ok=True)

            if frame_interval == 1:
                # Calculate crop positions for bottom-right corner after scaling to 1080p
                scaled_width = 1920
                scaled_height = 1080
                crop_x = scaled_width - crop_width - offset_x
                crop_y = scaled_height - crop_height - offset_y

                # Extract and compress all frames using ffmpeg with quality adjustment
                ffmpeg_cmd = [
                    'ffmpeg',
                    '-i', mp4_path,
                    '-vf', f"fps=60,scale=1920:1080,crop={crop_width}:{crop_height}:{crop_x}:{crop_y}",
                    '-q:v', '10',  # Adjust the quality, 1 (best) to 31 (worst), 2 for good quality
                    os.path.join(frames_dir, '%06d.jpg')  # 6 digits of zero padding
                ]
            else:
                # Scale to 1080p, overlay the frame number with drawtext, then keep every Nth frame
                ffmpeg_cmd = [
                    'ffmpeg',
                    '-reinit_filter', '0',
                    '-i', mp4_path,
                    '-vf', f"fps=60,scale=1920:1080,crop={crop_width}:{crop_height}:{1920-crop_width-offset_x}:{1080-crop_height-offset_y},drawtext=text='%{{n}}':start_number=1:fontcolor=white:bordercolor=black:borderw=3:fontsize=50,select='not(mod(n\\,{frame_interval}))'",
                    '-fps_mode', 'vfr',
                    '-q:v', '10',  # Adjust the quality, 1 (best) to 31 (worst), 2 for good quality
                    os.path.join(frames_dir, '%06d.jpg')  # 6 digits of zero padding
                ]

            subprocess.run(ffmpeg_cmd)

            # Rename back to .trec
            os.rename(mp4_path, trec_path)

            print(f"Processed {filename}")



But this gives me sequential numbers in the filenames, not the actual frame numbers. The frame numbers I draw over the images do represent the actual frame numbers.


Basically ffmpeg is able to get the current frame number and draw it on the image using:


drawtext=text='%{n}'



The question is about getting that value out to the filenames.
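
A possible direction, not from the original post and untested here: the image2 muxer has a frame_pts option that numbers the output files from each frame's presentation timestamp instead of a running counter. If the every-Nth-frame decimation is done with select alone (dropping the fps=60 re-timing from the filter chain, since that renumbers frames), the surviving frames keep their original timestamps, and on constant-frame-rate input the file numbers should line up with the source frame indices (0, 30, 60, ...). The input name, interval and output pattern below are placeholders:

ffmpeg -i input.mp4 -vf "select='not(mod(n\,30))'" -fps_mode vfr -frame_pts 1 out_%06d.jpg

In the Python function above, the equivalent change would be appending '-frame_pts', '1' to ffmpeg_cmd just before the output pattern, since frame_pts is an output (muxer) option.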


-
FFMPEG screen grab of windows desktop and then exporting to multiple bandwidth ts and m3u8 files with master playlist
26 July 2018, by Tank Daniels — Forgive me, as this is my first post and I'm really stuck and need help. My boss needs me to screen-capture a Windows desktop as input and produce at least four outputs depending on bandwidth. Each output "stream" (I'm guessing that is the right terminology) must have its own .ts files and .m3u8 file, along with a master .m3u8 playlist that lists the four individual .m3u8 files.
Every attempt I've made only produces a master .m3u8 file listing a single output stream (code enclosed below).
The end result I need is a master .m3u8 that can go into a webpage and stream the user's screen grab over HLS, with the video output switching depending on the bandwidth available to the viewer. If you can help me with any advice I will be eternally grateful, as I need to impress my new boss in my new job and I've been working on this for ages with no joy.
Example: [screenshot] This is what I need the master M3U8 to look like.
Example: [screenshot] This is what the M3U8 actually looks like now when I run my code.
Below is the code I am currently using. Please help.
ffmpeg -f dshow -i video="UScreenCapture":audio="virtual-audio-capturer" -master_pl_name master.m3u8 -master_pl_publish_rate 30 -tune fastdecode -vf scale=w=640:h=360:force_original_aspect_ratio=decrease -c:a aac -ar 48000 -c:v h264 -crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -hls_time 4 -hls_playlist_type vod -b:v 800k -maxrate 856k -bufsize 1200k -b:a 96k -hls_segment_filename 360p_%03d.ts 360p.m3u8 -master_pl_name master.m3u8 -master_pl_publish_rate 30 -vf scale=w=842:h=480:force_original_aspect_ratio=decrease -c:a aac -ar 48000 -c:v h264 -crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -hls_time 4 -hls_playlist_type vod -b:v 1400k -maxrate 1498k -bufsize 2100k -b:a 128k -hls_segment_filename 480p_%03d.ts 480p.m3u8 -master_pl_name master.m3u8 -master_pl_publish_rate 30 -vf scale=w=1280:h=720:force_original_aspect_ratio=decrease -c:a aac -ar 48000 -c:v h264 -crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -hls_time 4 -hls_playlist_type vod -b:v 2800k -maxrate 2996k -bufsize 4200k -b:a 128k -hls_segment_filename 720p_%03d.ts 720p.m3u8 -master_pl_name master.m3u8 -master_pl_publish_rate 30 -vf scale=w=1920:h=1080:force_original_aspect_ratio=decrease -c:a aac -ar 48000 -c:v h264 -crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -hls_time 4 -hls_playlist_type vod -b:v 5000k -maxrate 5350k -bufsize 7500k -b:a 192k -hls_segment_filename 1080p_%03d.ts 1080p.m3u8
Many Thanks for looking. Chris.
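
Not part of the original post, but for reference: the usual way to get one master playlist referencing several variant streams out of a single ffmpeg run is the hls muxer's -var_stream_map option, feeding it scaled copies of the capture produced by the split filter. The sketch below shows only two variants and uses assumed bitrates, filenames and ^ line continuations for the Windows shell; it has not been tested against this capture setup:

ffmpeg -f dshow -i video="UScreenCapture":audio="virtual-audio-capturer" ^
 -filter_complex "[0:v]split=2[v1][v2];[v1]scale=w=640:h=360[v1out];[v2]scale=w=1280:h=720[v2out]" ^
 -map "[v1out]" -c:v:0 h264 -b:v:0 800k -maxrate:v:0 856k -bufsize:v:0 1200k ^
 -map "[v2out]" -c:v:1 h264 -b:v:1 2800k -maxrate:v:1 2996k -bufsize:v:1 4200k ^
 -map a:0 -c:a:0 aac -b:a:0 96k ^
 -map a:0 -c:a:1 aac -b:a:1 128k -ar 48000 ^
 -g 48 -keyint_min 48 -sc_threshold 0 ^
 -f hls -hls_time 4 -hls_playlist_type event ^
 -master_pl_name master.m3u8 -var_stream_map "v:0,a:0 v:1,a:1" ^
 -hls_segment_filename stream_%v_%03d.ts stream_%v.m3u8

master.m3u8 then lists stream_0.m3u8 and stream_1.m3u8, each with its own .ts segments. Inside a batch file the % signs need doubling (%%v, %%03d), and -hls_playlist_type can be changed back to vod if the capture finishes before playback starts.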
-
ffmpeg video replay with time-sync needs actual recording times
16 July 2018, by navySV — I am attempting to use ffmpeg to replay multiple video files time-synched, but the zero-based video start time is preventing this.
I have ffmpeg commands that successfully capture a Microsoft Windows 7 desktop into a video file and replay it with a timestamp value (see below), but the internal timestamp always starts near zero. How can ffmpeg display the actual time when the video was recorded (and not the time since the start of the video, i.e. zero)?
For example, if the video started to be recorded at 10:47 am, the ffplay command should display a timestamp similar to "10:47:31" during playback (and not "00:00:31").
video-capture command:
ffmpeg -f gdigrab -offset_x 0 -offset_y 0 -video_size 1920x1080 -i desktop -c:v libx264 -preset medium -f mpegts -framerate 24 -y fileA.ts
playback command:
ffplay -vf "drawtext=fontfile=/windows/fonts/arial.ttf: text='%{pts\:gmtime\:0\:%H\\\:%M\\\:%S}':box=1:x=(w-tw)/2:y=h-(2*lh)" fileA.ts
parameters I've tried unsuccessfully in the previous commands (including moving them around to different places in the commands):
-timestamp now
-vsync 0
-copyts (every attempt to use -copyts generates errors about "non-strictly-monotonic PTS" or "Non-monotonous DTS in output stream", no matter where I put this parameter)
-filter_complex "[0:v] setpts=PTS"
The ultimate goal is to capture four video files (recorded on four different computers and probably having different start times), and then to replay all four in time-sync (which is not possible using only the zero-based start times).
For example, I've been successful at replaying four video files in a 2x2 arrangement using the following command (I added the -ss parameter to demonstrate that I can move the start time of the replay). Unfortunately, they always sync to the zero-based first video frame, so they all play from the beginning of their files. I need the replay to sync to the actual recorded time of each video. If the four videos were captured starting at 10:47:00, 10:47:51, 10:48:44 and 10:49:01, I want to replay all of them so that they all display the same timestamp at the same time (so if one video is showing 10:48:33, every video shows that same time, or a blank screen if that time is unavailable for it).
ffmpeg -ss 00:00:30 -i fileA.ts -i fileB.ts -i fileC.ts -i fileD.ts -filter_complex "[0:v][1:v]hstack[top];[2:v][3:v]hstack[bottom];[top][bottom]vstack[v]" -map "[v]" -timestamp now -f mpegts - | ./ffplay - -x 1920 -y 1080
Ideally, I would also like to be able to use a real time value (something like "ffplay -ss 10:48:00 ...") to start the video replay at a different position, but worst-case I can write a script to do the needed conversion of the time value.
My ffmpeg version is a Windows 7 64-bit static build "N-90810-g153e920892" on 2018Apr22 (downloaded from https://www.ffmpeg.org/download.html)
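
Not from the original post, but one direction that builds on the drawtext expansion already used above: if the wall-clock start time of each recording is known as a Unix epoch, it can be passed to gmtime as the offset that is added to pts, so playback shows the real recording time instead of a zero-based one. For example, 1531738020 is 2018-07-16 10:47:00 UTC (use localtime instead of gmtime if the offset is kept in local time):

ffplay -vf "drawtext=fontfile=/windows/fonts/arial.ttf: text='%{pts\:gmtime\:1531738020\:%H\\\:%M\\\:%S}':box=1:x=(w-tw)/2:y=h-(2*lh)" fileA.ts

For the 2x2 mosaic, the same recorded start times can be turned into per-input offsets with -itsoffset (for instance -itsoffset 51 placed before the -i of a file that started 51 seconds after the earliest one), which shifts that input's timestamps so all four streams sit on a common clock before hstack/vstack combine them; how the stacking filters handle the late-starting streams (pad versus wait) would still need to be verified.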