
Other articles (28)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Frequent problems
10 March 2010. PHP and safe_mode enabled
One of the main sources of problems stems from the PHP configuration, in particular from having safe_mode enabled.
The solution would be either to disable safe_mode or to place the script in a directory accessible to Apache for the site.
On other sites (4327)
-
Calling ffmpeg from python in a loop
22 June 2018, by Steffen Lesch. I am currently writing my first Python script to split large audio files into smaller ones (splitting albums into individual tracks):
import subprocess

def genList():
    with open("commands.txt") as file:
        ffmpeg_template_str = 'ffmpeg -i audio.FLAC -acodec copy -ss START_TIME -to END_TIME LITTLE_FILE'
        lines = file.readlines()
        results = []
        for line in lines:
            argument_list = line.split(' ')
            result = ffmpeg_template_str
            results.append(result.replace('START_TIME', argument_list[0]).replace('END_TIME', argument_list[1]).replace('LITTLE_FILE', argument_list[2]))
        return results
def split():
    commands = genList()
    for command in commands:
        subprocess.call(command.split(' '))

split()

When I execute the script, many command line windows pop up, but only the last one delivers the desired result.
So if I want to split an audio file into smaller files, only the last split operation seems to execute correctly.
Additionally, if I don't use a for loop and just paste subprocess.call multiple times into the code, it works just fine:

subprocess.call(command1)
subprocess.call(command2)
subprocess.call(command3)

Any help will be greatly appreciated.
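One likely cause worth checking (an assumption, since commands.txt is not shown): readlines() keeps the trailing newline on every line except possibly the last, so the generated output filename ends with "\n" for every command but the last one; on Windows (which the popping command line windows suggest) a filename containing a newline is not even valid, so those ffmpeg calls fail. A minimal sketch of the same loop with the lines stripped first:

import subprocess

def genList():
    ffmpeg_template_str = 'ffmpeg -i audio.FLAC -acodec copy -ss START_TIME -to END_TIME LITTLE_FILE'
    results = []
    with open("commands.txt") as file:
        for line in file:
            # strip() removes the trailing "\n", otherwise the output filename
            # of every command but the last ends with a newline character
            start, end, target = line.strip().split(' ')
            results.append(ffmpeg_template_str
                           .replace('START_TIME', start)
                           .replace('END_TIME', end)
                           .replace('LITTLE_FILE', target))
    return results

def split():
    for command in genList():
        subprocess.call(command.split(' '))

split()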
-
Extracting Y only (of YUV420p) frames from an MP4 file using ffmpeg?
7 January 2020, by Anidh Singh. My main objective is to extract the I’th, I+1’th (next) and I-1’th (previous) frames in the form of Y only (of YUV 420) from an mp4 video. The procedure I am using right now is:
-
I extracted the list of all the I-frames from the video using the command:
ffprobe "input.mp4" -show_frames | grep 'pict_type=I' -A 1 > frame_info.txt
-
Next, I used a Python script to parse this txt file, find the numbers of all of the I-frames, and then extract those frames one by one using the command:
ffmpeg -i input.mp4 -vf select='eq(n\,{1}),setpts=N/25/TB,extractplanes=y' -vsync 0 -pix_fmt gray {1}.yuv
This happens via a subprocess call from Python; a minimal sketch of that loop is shown below.
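For reference, this is roughly what the per-frame extraction loop could look like (the input name and frame numbers are placeholders; in practice the numbers come from parsing frame_info.txt from step 1):

import subprocess

# Hypothetical I-frame numbers, parsed from frame_info.txt in step 1
i_frame_numbers = [0, 250, 500]

for n in i_frame_numbers:
    # One ffmpeg run per frame: select frame n, keep only the Y plane,
    # and write it as a single raw grayscale frame
    subprocess.call([
        "ffmpeg", "-i", "input.mp4",
        "-vf", "select=eq(n\\,{0}),setpts=N/25/TB,extractplanes=y".format(n),
        "-vsync", "0", "-pix_fmt", "gray", "{0}.yuv".format(n),
    ])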
This works fine for small-resolution videos like 240p or 480p, but as soon as I move to 1080p videos the time to extract even a single frame grows dramatically, because ffmpeg seeks to that frame number and has to decode the mp4 file up to that point.
I have a lot of 1080p files and I was looking to decrease the time. The solution I had in mind was to extract the Y plane (of YUV 420) of every frame from the mp4 and then keep only the I-frames, since I already have the list of all the I-frames from step 1. The command I am using for this is:
ffmpeg -y -i input.mp4 -vf "fps=59.94" -pix_fmt gray file_name.yuv
-
The problem with the above command is that it continuously appends to a single yuv file, but I want an individual Y file for each frame of the mp4 video.
-
My restriction is to use FFmpeg only, as FFmpeg's Y values match what I want.
TL;DR: I want to extract only the Y plane (of YUV 420p) from an mp4 video, for the I’th, I-1’th and I+1’th frames.
Thanks for helping out.
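One possible way to turn the single concatenated gray .yuv file produced by the last command into one Y file per wanted frame (a sketch only; the resolution and frame numbers below are assumptions): since -pix_fmt gray writes exactly width x height bytes per frame, the file can simply be sliced by frame index.

# Slice file_name.yuv (written with -pix_fmt gray) into one file per wanted frame.
# WIDTH/HEIGHT and the frame numbers are placeholders.
WIDTH, HEIGHT = 1920, 1080
FRAME_SIZE = WIDTH * HEIGHT          # 1 byte per pixel for gray

wanted = {249, 250, 251}             # I-1'th, I'th and I+1'th frame numbers

with open("file_name.yuv", "rb") as f:
    index = 0
    while True:
        frame = f.read(FRAME_SIZE)
        if len(frame) < FRAME_SIZE:
            break
        if index in wanted:
            with open("{0}.y".format(index), "wb") as out:
                out.write(frame)
        index += 1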
-
-
Recording RTSP stream with Python
6 May 2022, by ロジャー. Currently I am using MediaPipe with Python to monitor an RTSP stream from my camera, working as a security camera. Whenever the MediaPipe holistic model detects humans, the script writes the frame to a file.


i.e.


# cv2.VideoCapture(RTSP)
# read frame
# while mediapipe detect
# cv2.VideoWriter write frame
# store file
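
Filled out, a minimal runnable version of that outline might look like the sketch below (the RTSP URL, output name and frame rate fallback are placeholders, and MediaPipe's pose landmarks are used here as the "person detected" signal):

import cv2
import mediapipe as mp

RTSP_URL = "rtsp://user:pass@camera/stream"   # placeholder

cap = cv2.VideoCapture(RTSP_URL)
fps = cap.get(cv2.CAP_PROP_FPS) or 25         # some RTSP sources report 0
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

writer = None
with mp.solutions.holistic.Holistic(min_detection_confidence=0.5) as holistic:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:            # a person is in view
            if writer is None:
                fourcc = cv2.VideoWriter_fourcc(*"mp4v")
                writer = cv2.VideoWriter("event.mp4", fourcc, fps, (width, height))
            writer.write(frame)
        elif writer is not None:              # person left: close the file
            writer.release()
            writer = None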



Recently I wanted to add audio recording support. From my research, it is not possible to record audio with OpenCV; it has to be done with FFmpeg or PyAudio.


I am facing these difficulties.


-
When a person walks in front of the camera, it takes maybe less than 2 seconds. By the time the RTSP stream has been read by OpenCV, the human detected with MediaPipe, and FFmpeg started for recording, that person will already have walked far away. So the FFmpeg method does not seem to work for me.


-
For the PyAudio method I am currently studying, I need to create 2 threads establishing separate RTSP connections. One thread reads the video with OpenCV and MediaPipe; the other records the audio whenever the OpenCV thread notices that a human is detected. I have tried using several devices to read the RTSP stream, and their timestamps (watermarked on the video) differ by several seconds. So I doubt I can get the video from OpenCV and the audio from PyAudio in sync when merging them into one single video.

Is there any suggestion on how to solve this problem?


Thanks.
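One possible direction, offered only as a sketch and not as a verified solution: instead of starting FFmpeg when a person is detected, let FFmpeg copy the RTSP stream (video and audio together, so they stay in sync) into short segments continuously, and have the OpenCV/MediaPipe loop decide afterwards which segment files to keep and which to delete. The URL and segment length below are placeholders.

import subprocess

RTSP_URL = "rtsp://user:pass@camera/stream"   # placeholder

# Record continuously into 2-second, self-contained segments without
# re-encoding; nothing is missed while detection is still starting up.
recorder = subprocess.Popen([
    "ffmpeg", "-rtsp_transport", "tcp", "-i", RTSP_URL,
    "-c", "copy",
    "-f", "segment", "-segment_time", "2", "-reset_timestamps", "1",
    "-strftime", "1", "segment_%Y%m%d_%H%M%S.mp4",
])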


-