
Media (91)

Other articles (111)

  • Videos

    21 April 2011, by

    As with "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 video tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one) and that each browser natively handles only certain video formats.
    Its main advantage, on the other hand, is that video playback is supported natively by the browser, which makes it possible to do without Flash and (...)
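
    For illustration (not part of the article), assuming hypothetical file names, the pattern behind this is a video tag listing several sources, so that each browser can pick a format it supports natively, with the tag's inner content as the fallback:

    <video controls>
      <source src="clip.mp4" type="video/mp4">
      <source src="clip.ogv" type="video/ogg">
      <!-- fallback for browsers without HTML5 video support, e.g. a Flash player -->
    </video>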

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To work around the installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script has been created to make this step easier on a server running a compatible Linux distribution.
    To use it, you need SSH access to your server and a "root" account, which makes it possible to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

On other sites (4363)

  • Recording an RTSP stream with Python

    6 May 2022, by ロジャー

    Currently I am using MediaPipe with Python to monitor the RTSP stream from my camera, which works as a security camera. Whenever the MediaPipe holistic model detects a human, the script writes the frame to a file, i.e.:

    # cv2.VideoCapture(RTSP)
    # read frame
    # while mediapipe detects
    #     cv2.VideoWriter writes frame
    # store file
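
    A rough, runnable version of that pseudocode might look like the sketch below; the RTSP address, the output settings, and the pose-landmark test are illustrative assumptions, not details from the question:

    import cv2
    import mediapipe as mp

    RTSP = "rtsp://camera.example/stream"            # placeholder address
    cap = cv2.VideoCapture(RTSP)                     # open the RTSP stream
    holistic = mp.solutions.holistic.Holistic()      # MediaPipe holistic model
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = None

    while cap.isOpened():
        ok, frame = cap.read()                       # read frame
        if not ok:
            break
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:                   # MediaPipe sees a person
            if writer is None:                       # open the output file lazily
                h, w = frame.shape[:2]
                writer = cv2.VideoWriter("detected.mp4", fourcc, 25.0, (w, h))
            writer.write(frame)                      # write the frame
    cap.release()
    if writer is not None:
        writer.release()                             # store the file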

    Recently I wanted to add audio recording support. I have done some research and found that it is not possible to record audio with OpenCV; it has to be done with FFmpeg or PyAudio.

    I am facing these difficulties:

    1. When a person walks through in front of the camera, it takes maybe less than 2 seconds. By the time the RTSP stream has been read by OpenCV, the human has been detected with MediaPipe, and FFmpeg has been started for recording, that human will have walked far, far away already. So the FFmpeg method does not seem workable for me.

    2. For the PyAudio method, which I am currently studying, I need to create 2 threads establishing individual RTSP connections. One thread is for the video, to be read by OpenCV and MediaPipe. The other thread is for the audio, to be recorded when the OpenCV thread notices that a human has been detected. I have tried using several devices to read the RTSP streams; the devices show timestamps (watermarked on the video) that differ by several seconds. So I doubt that I can get the video from OpenCV and the audio from PyAudio in sync when merging them into one single video. (A rough sketch of this two-thread layout follows below.)
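
    A minimal sketch of that two-thread layout, for illustration only: a threading.Event serves as the shared "person detected" flag. The MediaPipe check mirrors the video sketch above, but the audio side is stubbed out, since PyAudio itself only reads local input devices and the exact audio path is the open question here:

    import threading
    import time

    import cv2
    import mediapipe as mp

    RTSP = "rtsp://camera.example/stream"            # placeholder address
    detected = threading.Event()                     # set while a person is in frame

    def video_worker():
        holistic = mp.solutions.holistic.Holistic()
        cap = cv2.VideoCapture(RTSP)                 # video RTSP connection
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:               # person visible: raise the flag
                detected.set()
            else:
                detected.clear()

    def audio_worker():
        # Stub: real audio capture would come from PyAudio or an FFmpeg pipe;
        # this only shows where the shared flag would gate the recording.
        while True:
            time.sleep(0.1)                          # stand-in for reading an audio chunk
            if detected.is_set():
                pass                                 # append the chunk to the audio file

    threading.Thread(target=audio_worker, daemon=True).start()
    video_worker()                                   # run the video loop in the main thread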


    Is there any suggestion on how to solve this problem?

    Thanks.

  • Can't read mp4 or avi in opencv ubuntu

    15 November 2016, by Diederik

    I’m trying to read in a .mp4 (or .avi) file using this script:

    import cv2
    import math
    import sys
    import numpy as np

    class VideoAnalysis:

        def __init__(self, mp4_video):
            self.video = mp4_video
            # cv2.cv.* constants are the OpenCV 2.x names (cv2.CAP_PROP_POS_FRAMES in 3.x+)
            self.pos_frame = self.video.get(cv2.cv.CV_CAP_PROP_POS_FRAMES)

        def run(self):
            while True:
                flag, frame = self.video.read()
                if flag:
                    cv2.imshow("frame", frame)
                    self.pos_frame = self.video.get(cv2.cv.CV_CAP_PROP_POS_FRAMES)
                    print(str(self.pos_frame) + " frames")
                else:
                    # The next frame is not ready, so we try to read it again
                    self.video.set(cv2.cv.CV_CAP_PROP_POS_FRAMES, self.pos_frame - 1)
                    print("frame is not ready")
                    # It is better to wait a while for the next frame to be ready
                    cv2.waitKey(1000)
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break

    cap = cv2.VideoCapture("opdracht.mp4")
    while not cap.isOpened():
        cap = cv2.VideoCapture("opdracht.mp4")
        print("Wait for the header")

    video_analyse = VideoAnalysis(cap)
    video_analyse.run()

    I started off by just using Python 2.7 and OpenCV. At that point the script kept spinning in the "Wait for the header" loop; after some research I learned that I had to install FFmpeg, so I installed it with sudo apt-get install ffmpeg, but then the script got stuck in the "frame is not ready" loop. After some reading I learned that I might have to recompile both FFmpeg and OpenCV from source, so I did. I now run FFmpeg 3.2 and OpenCV 2.4.13, but OpenCV still can’t read a single frame from my video (which is in the same folder as the script).
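
    (As a side note, not part of the original post: one quick way to see whether a given cv2 build was compiled with FFmpeg support at all is to print its build information and look for "FFMPEG: YES" in the Video I/O section.)

    import cv2

    # Dump the compile-time configuration of this cv2 build; if the
    # Video I/O section shows "FFMPEG: NO", mp4/avi files cannot be demuxed.
    print(cv2.getBuildInformation())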

    I really don’t understand what I am doing wrong.

  • FFMPEG Scene Detection - Trim Blank Background from Video at the Start and End

    10 June 2014, by user3521682

    Summary:
    Need to programmatically trim the video when the scene is not changing at the beginning & end.

    Example video : http://www.filehosting.co.nz/finished3.mp4
    (Quality is much higher in the real video)

    Background:

    There is a large number of videos for an online store; each video begins with a blank background, then the model walks on (at a random time, a few seconds in), and then walks off after a random time (around 15 seconds). The end of the video is trimmed seemingly at random; there could be up to 15 seconds of 'nothing' at the end of the video.

    The camera does not move. There is no sound on the videos.
    The videos come from a camera in MOV format, sideways.

    I already have FFMPEG converting from MOV to MP4, rotating the video, adding an audio track, and trimming the audio at the end of the video.

    Research:
    I understand that I should probably re-encode the video with a very high (?) tolerance for i-frames (so that only two are made per video), then export the times to a text file and use it to cut the video (probably parsing it in BASH and using that to build the FFMPEG commands).

    Does anyone have any idea how I could generate just two key-frames per video?
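
    One hedged sketch of how those times could be obtained without controlling keyframe placement at all: ffmpeg's select filter can log scene-change timestamps via showinfo, and a bash script can parse them to build the trim command. The 0.1 threshold and the file names are assumptions to tune, not known-good values:

    # Log scene changes (the model entering/leaving the static background)
    ffmpeg -i input.mp4 -vf "select='gt(scene,0.1)',showinfo" -f null - 2> scenes.log

    # Pull the pts_time values out of the showinfo lines
    grep -o 'pts_time:[0-9.]*' scenes.log | cut -d: -f2 > times.txt

    # Trim from the first to the last detected change (START/END taken from times.txt);
    # note that -c copy cuts on existing keyframes, so drop it to re-encode for
    # frame-accurate cuts.
    ffmpeg -ss "$START" -to "$END" -i input.mp4 -c copy trimmed.mp4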
