
Media (91)

Other articles (36)

  • Installation in farm mode

    4 February 2011

    Farm mode makes it possible to host several MediaSPIP sites while installing its functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
    To begin with, you must have installed the same files as the installation (...)

  • Adding specific information to users, and other changes to author-related behaviour

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.

On other sites (5354)

  • FFMPEG and Python: Stream a frame into video

    17 August 2023, by Vasilis Lemonidis

    Old approach

    


    I have created a small class for the job. After streaming the third frame, I get an error from FFMPEG:

    


    pipe:0: Invalid data found when processing input

    


    and then I get a broken pipe.

    


    I have a feeling my ffmpeg input arguments are incorrect; I have little experience with the tool. Here is the code:

    


import subprocess
import os
import shutil
import sys

import cv2
import numpy as np


class VideoUpdater:
    def __init__(self, video_path: str, framerate: int):
        assert video_path.endswith(".flv")
        self._ps = None

        self.video_path = video_path
        self.framerate = framerate
        self._video = None
        self.curr_frame = None
        if os.path.isfile(self.video_path):
            # keep a copy of the existing file and replay its frames into the ffmpeg pipe
            shutil.copyfile(self.video_path, self.video_path + ".old")
            cap = cv2.VideoCapture(self.video_path + ".old")
            while cap.isOpened():
                ret, self.curr_frame = cap.read()
                if not ret:
                    break
                if len(self.curr_frame.shape) == 2:
                    self.curr_frame = cv2.cvtColor(self.curr_frame, cv2.COLOR_GRAY2RGB)
                self.ps.stdin.write(self.curr_frame.tobytes())

    @property
    def ps(self) -> subprocess.Popen:
        # lazily start the ffmpeg process that reads raw frames from stdin
        if self._ps is None:
            framesize = self.curr_frame.shape[0] * self.curr_frame.shape[1] * 3 * 8
            self._ps = subprocess.Popen(
                f"ffmpeg -i pipe:0 -vcodec mpeg4 -s qcif -frame_size {framesize} -y {self.video_path}",
                shell=True,
                stdin=subprocess.PIPE,
                stdout=sys.stdout,
            )
        return self._ps

    def update(self, frame: np.ndarray):
        # convert grayscale frames to 3 channels, then push the raw bytes to ffmpeg
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2RGB)
        self.curr_frame = frame
        self.ps.stdin.write(frame.tobytes())


    


    and here is a script I use to test it:

    


    import os
    import numpy as np
    import cv2

    size = (300, 300, 3)
    img_array = [np.random.randint(255, size=size, dtype=np.uint8) for c in range(50)]

    tmp_path = "tmp.flv"
    tmp_path = str(tmp_path)
    out = VideoUpdater(tmp_path, 1)

    for i in range(len(img_array)):
        out.update(img_array[i])
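
    For reference, ffmpeg cannot guess the layout of raw frames arriving on a pipe, so the input side normally has to spell out the raw format explicitly. Below is a minimal sketch of the kind of invocation that is usually needed; the 300x300 size, bgr24 pixel format and 1 fps rate are assumptions taken from the test script above, not from the original post:

    import subprocess

    # assumption: 300x300 3-channel frames at 1 fps, as in the test script above
    cmd = (
        "ffmpeg -f rawvideo -pix_fmt bgr24 -s 300x300 -r 1 -i pipe:0 "
        "-c:v libx264 -pix_fmt yuv420p -y tmp.flv"
    )
    ps = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE)

    # frames would then be written exactly as in VideoUpdater.update():
    # ps.stdin.write(frame.tobytes())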


    


    Update: closer to what I want

    


    Having studied further how ffmpeg works internally, I went for an approach without pipes, in which a one-frame video is created and appended to the .ts file on every update:

    


import os
import shutil
import subprocess
import logging
from tempfile import NamedTemporaryFile

import cv2
import numpy as np

LOGGER = logging.getLogger(__name__)


class VideoUpdater:
    def __init__(self, video_path: str, framerate: int):
        if not video_path.endswith(".mp4"):
            LOGGER.warning(
                f"File type {os.path.splitext(video_path)[1]} not supported for streaming, switching to ts"
            )
            video_path = os.path.splitext(video_path)[0] + ".mp4"

        self._ps = None
        self.env = {}
        self.ffmpeg = "ffmpeg "
        self.video_path = video_path
        self.ts_path = video_path.replace(".mp4", ".ts")
        self.tfile = None
        self.framerate = framerate
        self._video = None
        self.last_frame = None
        self.curr_frame = None

    def update(self, frame: np.ndarray):
        # normalise every incoming frame to 3-channel BGR before writing it out
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        else:
            frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        self.writeFrame(frame)

    def writeFrame(self, frame: np.ndarray):
        tImLFrame = NamedTemporaryFile(suffix=".png")
        tVidLFrame = NamedTemporaryFile(suffix=".ts")

        # 1. turn the single frame into a short .ts clip
        cv2.imwrite(tImLFrame.name, frame)
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-loop 1 -r {self.framerate} -i {tImLFrame.name} -t {self.framerate} -vcodec libx264 -pix_fmt yuv420p -y {tVidLFrame.name}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        ps.communicate()

        # 2. append the clip to the running .ts file (or start it)
        if os.path.isfile(self.ts_path):
            # this does not work to watch, as timestamps are not updated
            ps = subprocess.Popen(
                self.ffmpeg
                + rf'-i "concat:{self.ts_path}|{tVidLFrame.name}" -c copy -y {self.ts_path.replace(".ts", ".bak.ts")}',
                env=self.env,
                shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
            ps.communicate()
            shutil.move(self.ts_path.replace(".ts", ".bak.ts"), self.ts_path)
        else:
            shutil.copyfile(tVidLFrame.name, self.ts_path)

        # 3. fixing timestamps; we don't have to wait for this operation
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-i {self.ts_path} -fflags +genpts -c copy -y {self.video_path}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        tImLFrame.close()
        tVidLFrame.close()



    


    As you may notice, a timestamp correction needs to be performed. Reading back the final mp4 file, however, I saw that it consistently has 3 fewer frames than the ts file: the first 3 frames are missing. Does anyone have an idea why this is happening?
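
    For anyone trying to reproduce this, the frame counts of the two containers can be compared with ffprobe. A small sketch; the file names tmp.ts and tmp.mp4 are placeholders for the paths produced by the class above:

    import subprocess

    def count_frames(path: str) -> int:
        """Decode the first video stream and return the exact number of frames."""
        out = subprocess.run(
            [
                "ffprobe", "-v", "error",
                "-count_frames",
                "-select_streams", "v:0",
                "-show_entries", "stream=nb_read_frames",
                "-of", "default=nokey=1:noprint_wrappers=1",
                path,
            ],
            capture_output=True, text=True, check=True,
        )
        return int(out.stdout.strip())

    # placeholder file names; compare the intermediate .ts with the final .mp4
    # print(count_frames("tmp.ts"), count_frames("tmp.mp4"))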

    


  • ffmpeg re-encode rtsp stream to H264 ONLY if stream is not H264

    26 December 2022, by logic instant

    I have 100 RTSP cams streaming to my server, which does RTSP -> HLS for web viewers.

    


    Most (about 90) of the RTSP cams are H264, but some can't be switched back from HEVC for various reasons.

    


    Is there a command in FFMPEG to:

    1. re-encode to H264 (preferably using libx264) if the stream is not H264, and
    2. just do copy-frame if the stream is already H264?


    


    PS: it's run from Python/Node in no-shell mode, so I'm not sure whether bash redirects/pipes would work.

    


    I have tried various ffmpeg commands.
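
    One approach that is often suggested is to probe each camera first and then choose between stream copy and libx264. A rough sketch only, with placeholder URLs and HLS settings that are not from the original post; since the commands are built as argument lists, no shell or bash redirects are involved:

    import subprocess

    def codec_of(rtsp_url: str) -> str:
        """Return the codec name of the first video stream, e.g. 'h264' or 'hevc'."""
        out = subprocess.run(
            [
                "ffprobe", "-v", "error",
                "-rtsp_transport", "tcp",
                "-select_streams", "v:0",
                "-show_entries", "stream=codec_name",
                "-of", "default=nokey=1:noprint_wrappers=1",
                rtsp_url,
            ],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    def hls_command(rtsp_url: str, playlist: str) -> list:
        # copy when the camera already sends H264, otherwise re-encode with libx264
        if codec_of(rtsp_url) == "h264":
            vcodec = ["-c:v", "copy"]
        else:
            vcodec = ["-c:v", "libx264", "-preset", "veryfast"]
        return [
            "ffmpeg", "-rtsp_transport", "tcp", "-i", rtsp_url,
            *vcodec,
            "-f", "hls", "-hls_time", "2", "-hls_list_size", "5",
            "-hls_flags", "delete_segments", playlist,
        ]

    # placeholder camera URL and playlist name
    # subprocess.Popen(hls_command("rtsp://camera-1/stream", "cam1.m3u8"))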

    


  • FFMPEG output to the Exact Folder using Python

    6 August 2021, by Ande Caleb

    I'm working on a simple script that uses ffmpeg to reduce the size of a video, add a watermark to it, and then move the final output into the compressed folder. This is my script.

    


    The compression works and the watermark works, but the issue I'm having is that the final output is placed in the root folder rather than in the compressed folder. Below are my folder structure and my script.

    


    Folder Structure

    


    rootfolder
    |--media
    |  |--vids
    |  |  |--(video files: mov, mp4s)...
    |  |--compressed
    |--encode.py


    


    Script (encode.py) file

    


import os
import subprocess
from pathlib import Path


dir_path = os.path.dirname(os.path.realpath(__file__))
vidfile = dir_path + '/media/vids/mv1.mov'
watermark = dir_path + '/media/watermark.png'
compressed = str(Path.cwd() / '/media/compressed/')

# 1. compress the video and store it in the media out folder
media_out = str(dir_path + "/compressed_mv1s.mov").replace(" ", "\\ ")
subprocess.run("ffmpeg -i " + vidfile.replace(" ", "\\ ") +
               " -vcodec libx264 -crf 22 " + media_out, shell=True)

# 2. add watermark to the video and move it to the compressed folder
media_watermarked = str(compressed + '/w_mv1.mov').replace(" ", "\\ ")
subprocess.run("ffmpeg -i " + media_out + " -i " + watermark +
               " -filter_complex \"overlay=main_w-(overlay_w+10) : main_h-(10+overlay_h)\" " + media_watermarked, shell=True)


    


    In summary, compressing the video works and adding the watermark works, but the last line fails: the error comes from the media_watermarked variable. I'm not sure what I'm doing wrong, but it isn't resolving the folder correctly when moving the final output into it. This is the error I get:

    


    (screenshot of the error message)
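
    If it helps, the most likely culprit is the line compressed = str(Path.cwd() / '/media/compressed/'): when pathlib joins with an absolute component (a leading slash), everything before it is discarded, so compressed ends up pointing at /media/compressed at the filesystem root rather than inside the project. A sketch of how the paths could be built instead, reusing the file names from the script above as assumptions, not as the original code:

    import subprocess
    from pathlib import Path

    dir_path = Path(__file__).resolve().parent
    vidfile = dir_path / "media" / "vids" / "mv1.mov"
    watermark = dir_path / "media" / "watermark.png"
    compressed = dir_path / "media" / "compressed"   # relative segments stay under the project root
    compressed.mkdir(parents=True, exist_ok=True)

    media_out = compressed / "compressed_mv1s.mov"
    # passing an argument list avoids the manual escaping of spaces in paths
    subprocess.run(
        ["ffmpeg", "-i", str(vidfile), "-vcodec", "libx264", "-crf", "22", "-y", str(media_out)],
        check=True,
    )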

    


    Also, how can I run the two ffmpeg steps at once, compressing the video and adding the watermark in a single pass, rather than doing them separately?
    Thanks.
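
    On that last point: the CRF compression and the overlay filter can usually be combined into one ffmpeg invocation, so the video is decoded and encoded only once. A rough sketch; the paths reuse the assumptions from the snippet above and w_mv1.mov is the final watermarked output:

    import subprocess
    from pathlib import Path

    dir_path = Path(__file__).resolve().parent
    vidfile = dir_path / "media" / "vids" / "mv1.mov"
    watermark = dir_path / "media" / "watermark.png"
    out_file = dir_path / "media" / "compressed" / "w_mv1.mov"

    # one pass: overlay the watermark bottom-right and encode once with libx264 at CRF 22
    subprocess.run(
        [
            "ffmpeg", "-i", str(vidfile), "-i", str(watermark),
            "-filter_complex", "overlay=main_w-(overlay_w+10):main_h-(10+overlay_h)",
            "-vcodec", "libx264", "-crf", "22",
            "-y", str(out_file),
        ],
        check=True,
    )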