Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (111)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Adding user-specific information and other author-related behavior changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviors (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".

  • Automatic installation script for MediaSPIP

    25 April 2011

    To work around installation difficulties caused mainly by server-side software dependencies, an all-in-one bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a "root" account, which will allow the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

On other sites (6700)

  • Node.js Readable: maximizing throughput/performance for a compute-intense Readable - the Writable doesn't pull data fast enough

    31 December 2022, by flohall

    General setup

    I developed an application on AWS Lambda with Node.js 14.
    I use a custom Readable implementation, FrameCreationStream, that uses node-canvas to draw images, SVGs and more on a canvas. The result is then extracted as a raw image buffer in BGRA. A single image buffer contains 1920 * 1080 * 4 bytes = 8,294,400 bytes, about 8 MB.
    This is then piped to the stdin of a child_process running ffmpeg.
    The highWaterMark of my Readable, which runs in objectMode: true, is set to 25, so the internal buffer can use up to 8 MB * 25 = 200 MB.
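
    For context on these numbers: in objectMode the highWaterMark counts objects rather than bytes, so a value of 25 means up to 25 buffered frames regardless of frame size. A minimal sketch of such a stream (frameStream and the read() stub are illustrative, not the real implementation):

    import {Readable} from 'stream';

    // in objectMode the highWaterMark is counted in objects, not bytes:
    // up to 25 pushed frames sit in the internal buffer before the stream
    // stops calling _read()
    const frameStream = new Readable({
        objectMode: true,
        highWaterMark: 25,
        read() {
            // push the next ~8 MB BGRA frame buffer here, or null when done
        }
    });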

    All this works fine and doesn't use too much RAM, but I noticed after some time that the performance is not ideal.

    Performance not optimal

    I have an example input that generates a video of 315 frames. If I set highWaterMark to a value above 25, the performance keeps increasing, right up to the point where I set it to 315 or above.

    For some reason ffmpeg doesn't start to pull any data until highWaterMark is reached. Obviously that's not what I want: ffmpeg should always consume data as soon as at least one frame is cached in the Readable and it has finished processing the previous frame, and the Readable should keep producing frames as long as highWaterMark hasn't been reached and the last frame hasn't been created. Ideally the Readable and the Writable are both busy all the time.

    I found another way to improve the speed: if I add a 100 ms timeout in the _read() method of the Readable after, say, every tenth frame, then the ffmpeg Writable uses this pause to write some frames to ffmpeg.
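
    A gentler variant of this workaround is to yield a single event-loop turn per frame with setImmediate instead of a fixed 100 ms timeout, so pending write/drain callbacks of the piped process can run without an environment-dependent delay. A minimal sketch (yieldToEventLoop is an illustrative helper, not part of the code below):

    // yield one event-loop turn between frames so that pending
    // write/drain callbacks of the piped ffmpeg process get a chance to run
    function yieldToEventLoop<T>(value: T): Promise<T> {
        return new Promise(resolve => setImmediate(() => resolve(value)));
    }

    // e.g. inside _read(): this.createImageBuffer().then(yieldToEventLoop).then(...)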

    It seems as if frames aren't passed to ffmpeg during frame creation because the Node.js main thread is busy?
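
    One way to verify this suspicion is Node's built-in event-loop delay histogram from perf_hooks; the following is purely a diagnostic sketch, not part of the application code:

    import {monitorEventLoopDelay} from 'perf_hooks';

    // sample the event-loop delay while frames are being rendered;
    // a large mean/max indicates the main thread is blocked by synchronous drawing work
    const histogram = monitorEventLoopDelay({resolution: 20});
    histogram.enable();

    setTimeout(() => {
        histogram.disable();
        // histogram values are reported in nanoseconds
        console.log('mean delay (ms):', histogram.mean / 1e6);
        console.log('max delay (ms):', histogram.max / 1e6);
    }, 10_000);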

    The fastest result I get is when I increase highWaterMark above the number of frames, which doesn't work for longer videos, as it would make the AWS Lambda RAM explode, and it defeats the whole idea of streaming. Using timeouts always gives me a stomach ache, and a timeout that fits one environment may not fit another. Any ideas?
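
    If the event loop is indeed the bottleneck, one direction worth exploring is rendering the frames in a worker thread, so the main thread stays free to service the pipe. A rough sketch under that assumption, where frame-worker.js is a hypothetical worker script and frameStream stands for the Readable piped into ffmpeg:

    import {Worker} from 'worker_threads';

    // draw frames off the main thread; the worker posts finished BGRA buffers
    // back, and the main thread only relays them into the Readable, so
    // pipe/drain callbacks are never starved by canvas work
    const worker = new Worker('./frame-worker.js'); // hypothetical worker script

    worker.on('message', (frame: Buffer) => {
        frameStream.push(frame); // frameStream: the Readable piped into ffmpeg's stdin
    });
    worker.on('error', err => frameStream.destroy(err));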

    FrameCreationStream

    import canvas from 'canvas';
    import {Readable} from 'stream';
    import {IMAGE_STREAM_BUFFER_SIZE, PerformanceUtil, RenderingLibraryError, VideoRendererInput} from 'vm-rendering-backend-commons';
    import {AnimationAssets, BufferType, DrawingService, FullAnimationData} from 'vm-rendering-library';

    /**
     * This is a proper back-pressure-compatible implementation of Readable for having a stream to read single frames from.
     * Whenever read() is called a new frame is created and added to the stream.
     * read() will be called internally until options.highWaterMark has been reached,
     * then calling read will be paused until one frame is read from the stream.
     */
    export class FrameCreationStream extends Readable {

        drawingService: DrawingService;
        endFrameIndex: number;
        currentFrameIndex: number = 0;
        startFrameIndex: number;
        frameTimer: [number, number];
        readTimer: [number, number];
        fullAnimationData: FullAnimationData;

        constructor(animationAssets: AnimationAssets, fullAnimationData: FullAnimationData, videoRenderingInput: VideoRendererInput, frameTimer: [number, number]) {
            super({highWaterMark: IMAGE_STREAM_BUFFER_SIZE, objectMode: true});

            this.frameTimer = frameTimer;
            this.readTimer = PerformanceUtil.startTimer();

            this.fullAnimationData = fullAnimationData;

            this.startFrameIndex = Math.floor(videoRenderingInput.startFrameId);
            this.currentFrameIndex = this.startFrameIndex;
            this.endFrameIndex = Math.floor(videoRenderingInput.endFrameId);

            this.drawingService = new DrawingService(animationAssets, fullAnimationData, videoRenderingInput, canvas);
            console.time("read");
        }

        /**
         * this method is only overwritten for debugging
         * @param size
         */
        read(size?: number): string | Buffer {
            console.log("read(" + size + ")");
            const buffer = super.read(size);
            console.log(buffer);
            console.log(buffer?.length);
            if (buffer) {
                console.timeLog("read");
            }
            return buffer;
        }

        // _read() will be called when the stream wants to pull more data in.
        // _read() will be called again after each call to this.push(dataChunk) once the stream is ready to accept more data.
        // https://nodejs.org/api/stream.html#readable_readsize
        // This way it is ensured that, even though this.createImageBuffer() is async, only one frame is created at a time and the order is kept.
        _read(): void {
            // as frame numbers are consecutive and unique, we have to draw each frame number (also the first and the last one)
            if (this.currentFrameIndex <= this.endFrameIndex) {
                PerformanceUtil.logTimer(this.readTimer, 'WAIT   -> READ\t');
                this.createImageBuffer()
                    .then(buffer => this.optionalTimeout(buffer))
                    // push means adding a buffered raw frame to the stream
                    .then((buffer: Buffer) => {
                        this.readTimer = PerformanceUtil.startTimer();
                        // the following two frame numbers start with 1 as first value
                        const processedFrameNumberOfScene = 1 + this.currentFrameIndex - this.startFrameIndex;
                        const totalFrameNumberOfScene = 1 + this.endFrameIndex - this.startFrameIndex;
                        // the overall frameId or frameIndex starts with frameId 0
                        const processedFrameIndex = this.currentFrameIndex;
                        this.currentFrameIndex++;
                        this.push(buffer); // nothing besides logging should happen after calling this.push(buffer)
                        console.log(processedFrameNumberOfScene + ' of ' + totalFrameNumberOfScene + ' processed - full video frameId: ' + processedFrameIndex + ' - buffered frames: ' + this.readableLength);
                    })
                    .catch(err => {
                        // errors will be finally handled when subscribing to the frame-creation stream in the ffmpeg service;
                        // this log is just generated for tracing errors and in case the handling in the ffmpeg service doesn't work
                        console.log("createImageBuffer: ", err);
                        this.emit("error", err);
                    });
            } else {
                // push(null) makes clear that this stream has ended
                this.push(null);
                PerformanceUtil.logTimer(this.frameTimer, 'FRAME_STREAM');
            }
        }

        private optionalTimeout(buffer: Buffer): Promise<Buffer> {
            if (this.currentFrameIndex % 10 === 0) {
                return new Promise(resolve => setTimeout(() => resolve(buffer), 140));
            }
            return Promise.resolve(buffer);
        }

        // prevent memory leaks - without this, lambda memory will increase with every call
        _destroy(): void {
            this.drawingService.destroyStage();
        }

        /**
         * Creates a raw pixel buffer that contains a single frame of the video drawn by the rendering library.
         */
        public async createImageBuffer(): Promise<Buffer> {
            const drawTimer = PerformanceUtil.startTimer();
            try {
                await this.drawingService.drawForFrame(this.currentFrameIndex);
            } catch (err: any) {
                throw new RenderingLibraryError(err);
            }

            PerformanceUtil.logTimer(drawTimer, 'DRAW   -> FRAME\t');

            const bufferTimer = PerformanceUtil.startTimer();
            // creates a raw pixel buffer containing simple binary data;
            // the exact same information (BGRA/screen ratio) has to be provided to ffmpeg, because ffmpeg cannot detect the format of raw input
            const buffer = await this.drawingService.toBuffer(BufferType.RAW);
            PerformanceUtil.logTimer(bufferTimer, 'CANVAS -> BUFFER');

            return buffer;
        }
    }


    FfmpegService


    import {ChildProcess, execFile} from 'child_process';
    import {Readable} from 'stream';
    import {FPS, StageSize} from 'vm-rendering-library';
    import {
        FfmpegError,
        LOCAL_MERGE_VIDEOS_TEXT_FILE, LOCAL_SOUND_FILE_PATH,
        LOCAL_VIDEO_FILE_PATH,
        LOCAL_VIDEO_SOUNDLESS_MERGE_FILE_PATH
    } from "vm-rendering-backend-commons";

    /**
     * This class bundles all ffmpeg usages for rendering one scene.
     * FFmpeg is a console program which can transcode nearly all types of sounds, images and videos from one format to another.
     */
    export class FfmpegService {

        ffmpegPath: string = null;

        constructor(ffmpegPath: string) {
            this.ffmpegPath = ffmpegPath;
        }

        /**
         * Convert a stream of raw images into an .mp4 video using the command line program ffmpeg.
         *
         * @param inputStream an input stream containing images in raw BGRA format
         * @param stageSize the size of a single frame in pixels (minimum is 2*2)
         * @param outputPath the filepath to write the resulting video to
         */
        public imageToVideo(inputStream: Readable, stageSize: StageSize, outputPath: string): Promise<void> {
            const args: string[] = [
                '-f',
                'rawvideo',
                '-r',
                `${FPS}`,
                '-pix_fmt',
                'bgra',
                '-s',
                `${stageSize.width}x${stageSize.height}`,
                '-i',
                // input "-" means input will be passed via pipe (streamed)
                '-',
                // codec that also the QuickTime player can understand
                '-vcodec',
                'libx264',
                '-pix_fmt',
                'yuv420p',
                /*
                 * "-movflags faststart":
                 * metadata at beginning of file
                 * needs more RAM
                 * file will be broken, if not finished properly
                 * higher application compatibility
                 * better for browser streaming
                 */
                '-movflags',
                'faststart',
                // "-preset ultrafast", // use this to speed up compression, but quality/compression ratio gets worse
                // don't overwrite an existing file here,
                // but delete the file at the beginning of execution in index.ts
                // (this is better for local testing, believe me)
                outputPath
            ];

            return this.execFfmpegPromise(args, inputStream);
        }

        private execFfmpegPromise(args: string[], inputStream?: Readable): Promise<void> {
            const ffmpegServiceSelf = this;
            return new Promise(function (resolve, reject) {
                const executionProcess: ChildProcess = execFile(ffmpegServiceSelf.ffmpegPath, args, (err) => {
                    if (err) {
                        reject(new FfmpegError(err));
                    } else {
                        console.log('ffmpeg finished');
                        resolve();
                    }
                });
                if (inputStream) {
                    // it's important to listen for errors of the input stream before piping it into the write stream;
                    // if we don't do this here, we get an unhandled promise exception for every issue in the input stream
                    inputStream.on("error", err => {
                        reject(err);
                    });
                    // don't reject the promise here, as the error will also be thrown inside execFile and will contain more debugging info;
                    // this log is just generated for tracing errors and in case the handling in execFile doesn't work
                    inputStream.pipe(executionProcess.stdin).on("error", err => console.log("pipe stream: ", err));
                }
            });
        }
    }


  • What should the timing be to capture screenshots as images and then convert the images to a video file using ffmpeg?

    22 July 2016, by TheLost Lostit

    I have a timer tick event where I take screenshots every 10 ms:

    int count = 0;
    private void timer1_Tick(object sender, EventArgs e)
    {
        // capture the screen, save it as a numbered .bmp file, then free the bitmap
        Bitmap bmp = new Bitmap(sc.CaptureScreen());
        bmp.Save(@"D:\SavedScreenshots\screenshot" + count + ".bmp", System.Drawing.Imaging.ImageFormat.Bmp);
        bmp.Dispose();
        count++;
    }

    Then I'm using the ffmpeg command line in a command prompt window to create a video file from all the images:

    ffmpeg -framerate 2 -i screenshot%d.bmp -c:v libx264 -r 30  -pix_fmt yuv420p out.mp4

    Every bitmap file on the hard disk is 7.91 MB in size.
    The details: 1920 x 1080 and bit depth 32.

    The problem is that with -framerate 2, when I play the video file in Windows Media Player it's very, very slow.
    When I set the framerate to 10 it's too fast.
    When I set the framerate to 4 I get a yellow error saying it is too large.

    But maybe the problem is that I'm taking a screenshot every 10 ms? Maybe I should take one every 1000 ms? And what should I then change in the ffmpeg command line?

    I want it to play at regular video speed, not too fast and not too slow.
    What I'm capturing in the screenshots is my desktop screen, and later I want to upload the video and show it to support staff in a forum.
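
    For reference on the arithmetic: a timer firing every 10 ms would produce 100 frames per second (and a Windows Forms timer cannot reliably tick that fast anyway; its practical resolution is roughly 15 ms), so captures encoded with -framerate 2 play back far slower than real time. For natural playback speed the input -framerate should match the real capture rate; for example, with a 1000 ms timer interval (one screenshot per second), something like:

    ffmpeg -framerate 1 -i screenshot%d.bmp -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4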

  • How can I use the ffmpeg.exe command line to create a video file from a number of images on the hard disk?

    22 July 2016, by TheLost Lostit

    In a command prompt window, in the directory of the images, I typed:

    D:\SavedScreenshots>ffmpeg.exe -r 3 -i Imgp%04d.bmp -s 720x480 test.avi

    The image files are BMP type.
    The first image file name is screenshot0.bmp.
    The last one is screenshot168.bmp.

    Example details of one image: width 1920, height 1080, bit depth 32.

    The ffmpeg.exe file is in the same directory as the images.

    In the command prompt console output I see:

    [image2 @ 00000000025624a0] Could find no file with path 'Imgp%04d.bmp' and index in the range 0-4
    Imgp%04d.bmp: No such file or directory

    So how should I write the command line?

    I found the problem and now it's working, but it's very strange.

    In C# I create the screenshots of my desktop; these are the images on the hard disk I want to create the video file from.
    In C#, in a timer tick event, I did:

    int count = 0;
    private void timer1_Tick(object sender, EventArgs e)
    {
        // save each captured screen as a numbered file (no explicit image format given)
        Bitmap bmp = new Bitmap(sc.CaptureScreen());
        bmp.Save(@"D:\SavedScreenshots\screenshot" + count + ".bmp");
        bmp.Dispose();
        count++;
    }

    This saved the images on the hard disk, each file between 129 and 132 KB. I could open and edit the images with no problems, but ffmpeg could not handle them and gave me the error above.

    Then I changed the C# code to this:

    bmp.Save(@"D:\SavedScreenshots\screenshot" + count + ".bmp", System.Drawing.Imaging.ImageFormat.Bmp);

    I added the System.Drawing.Imaging.ImageFormat.Bmp part.

    Now each image file is about 7 MB!
    And now ffmpeg can handle it with the command line:

    ffmpeg.exe -framerate 3 -i screenshot%d.bmp -vf scale=ntsc,setdar=16/9 test.avi

    I wonder why ffmpeg could not handle the images and didn't find the files or directory when they were 130 KB, but when they are 7 MB it does find them and creates the video file?

    Even now, when I type as the command line:

    ffmpeg  -i screenshot%03.bmp -s 720x480 test.avi

    I'm getting the error "no such file or directory".

    Only when I type:

    ffmpeg  -i screenshot%d.bmp -s 720x480 test.avi

    It’s working.

    Why is it that screenshot%3.bmp doesn't work but screenshot%d.bmp does?

    Also, plain screenshot0.bmp worked; only screenshot%3.bmp is not working.
    And in all the examples I saw I had to use screenshot%3 or %2, but they give me the "no such file or directory" error. (See the note below.)
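
    A note on the patterns: in ffmpeg's image2 demuxer, %d matches unpadded numbers (screenshot0.bmp through screenshot168.bmp), while %03d matches zero-padded three-digit names (screenshot000.bmp); %3 or %03 without the trailing d is not a valid printf-style pattern at all, which is why only %d works for these files. The demuxer also expects the first index of the sequence to lie between 0 and 4 by default (hence the "index in the range 0-4" message); -start_number selects a different first index, for example:

    ffmpeg -framerate 3 -start_number 0 -i screenshot%d.bmp -s 720x480 test.avi

    As for the 130 KB files: Bitmap.Save(path) without an explicit ImageFormat falls back to the PNG encoder for in-memory bitmaps, so those files were most likely PNG data under a .bmp extension rather than real bitmaps, which explains the size difference once ImageFormat.Bmp was specified.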