
Media (91)

Other articles (21)

  • Add user-specific information and other author-related behaviour changes

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to change certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.

  • Other interesting software

    13 April 2011, by

    We don’t claim to be the only ones doing what we do, and certainly not to be the best at it. We simply try to do it well and to keep getting better.
    The list below covers software that is more or less similar to MediaSPIP, or that aims to do more or less the same things.
    We don’t know these projects and haven’t tried them, but you can take a look.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

  • The statuses of mutualisation instances

    13 March 2010, by

    For reasons of general compatibility between the mutualisation management plugin and SPIP’s original functions, instance statuses are the same as for any other object (articles, etc.); only their names in the interface differ slightly.
    The possible statuses are: prepa (requested), which corresponds to an instance requested by a user. If the site has already been created in the past, it is switched to disabled mode. publie (validated), which corresponds to an instance validated by a (...)

On other sites (4410)

  • Error: ffmpeg exited with code 1: Error initializing complex filters. Not yet implemented in FFmpeg, patches welcome

    12 June 2023, by Mutahhir Khan

    I got this objective:

    write a function in typescript which takes images array of type

    interface Image {
    buffer: Buffer;
    start: number;
    end: number;
}

    and audio file of Buffer type, videoLength of type number. The function
    should use fluent-ffmpeg to create a video by combining these images
    based on their start and end time with a transition of fade filter.
    The audio should be placed on the video. The code will be running on an
    Ubuntu Linux distro. After the video generation is completed, save the
    video in the root directory of the project.

    import ffmpeg from "fluent-ffmpeg";
import * as fs from "fs";

interface Image {
    buffer: Buffer;
    start: number;
    end: number;
}

/**
 *
 * @param images contains the image buffers and their start and end times
 * @param audio  contains the audio buffer, save before use it
 * @param videoLength  contains the video length
 * @objective write a function in typescript which takes images array and audio file of Buffer type, videoLength of type number.
 * the function should use `fluent-ffmpeg` to create a video by combining these images
 * based on their start and end time with a transition of fade filter. The audio should be placed on the video.
 * The code will be running on an Ubuntu Linux distro. After the video generation is completed, save the video in the root directory of the project.
 */

function createVideo(images: Image[], audio: Buffer, videoLength: number) {
    const command = ffmpeg();

    const audioPath = "audio.mp3";
    const durations: number[] = [];

    // Add images with start and end times
    images.forEach((image, index) => {
        const frameDuration = image.end - image.start; // Calculate duration of each frame
        durations.push(frameDuration);
        const imagePath = `image-${index}.png`;
        fs.writeFileSync(imagePath, image.buffer); // Create temporary image files from buffers

        command.input(imagePath).inputOptions(`-loop 1`).inputOptions(`-t ${frameDuration}`);
    });

    command.input(audioPath);

    fs.writeFileSync(audioPath, audio); // Create temporary audio file from buffer

    // Add audio
    command.input(audioPath).inputOptions(`-stream_loop -1`).inputOptions(`-t ${videoLength}`);

    // Add fade filter
    command.complexFilter([
        {
            filter: "fade",
            options: {
                enable: `'between(t,0,1)'`, // Notice the single quotes around the expression
                x: "810",
                y: "465",
            },
            inputs: `0:v`,
            outputs: `fade0`,
        },
        {
            filter: "fade",
            options: {
                enable: `'between(t,1,2)'`, // Notice the single quotes around the expression
                x: "810",
                y: "465",
            },
            inputs: `1:v`,
            outputs: `fade1`,
        },
        {
            filter: "fade",
            options: {
                enable: `'between(t,2,3)'`, // Notice the single quotes around the expression
                x: "810",
                y: "465",
            },
            inputs: `2:v`,
            outputs: `fade2`,
        },
    ]);


    // Add output options
    command.outputOptions([
        "-map [outv]",
        "-map [outa]",
        "-shortest",
    ]);

    // Add output file
    command.output("output.mp4");

    // Run ffmpeg
    command.run();

    // Delete temporary files
    // fs.unlinkSync(audioPath);
    // images.forEach((image, index) => {
    //     fs.unlinkSync(`image-${index}.png`);
    // });
}

export async function fluentFFmpeg() {
    try {
        // Usage example
        const images: Image[] = [
            { buffer: fs.readFileSync("image-0.png"), start: 0, end: 5 },
            { buffer: fs.readFileSync("image-1.png"), start: 6, end: 10 },
            { buffer: fs.readFileSync("image-2.png"), start: 11, end: 15 },
        ];

        const audioBuffer: Buffer = fs.readFileSync("audio.mp3");
        const videoLength: number = 15; // Duration of the resulting video
        const response = createVideo(images, audioBuffer, videoLength);
        console.log("response", response);
    } catch (error) {
        console.log("error", error);
    }
}
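
    A few things in the posted graph look problematic: the output options map [outv] and [outa], but no filter in the graph produces those labels; the fade0/fade1/fade2 outputs are never consumed; the audio input is added twice; and x/y are not options of ffmpeg’s fade filter. Whatever the exact trigger of the error, below is a rough sketch (not a verified fix) of a graph the parser should accept: each looped image is scaled to a common size, faded in from black, and the segments are concatenated into [outv], with the single audio input mapped by index. The 1920x1080 size and the one-second fade are placeholder values.

import ffmpeg from "fluent-ffmpeg";
import * as fs from "fs";

interface Image {
    buffer: Buffer;
    start: number;
    end: number;
}

// Sketch: scale each looped image to a common size, fade it in, concatenate the
// segments into [outv], and map the single audio input by index.
function createVideoSketch(images: Image[], audio: Buffer, videoLength: number): Promise<void> {
    const command = ffmpeg();

    images.forEach((image, index) => {
        const duration = image.end - image.start;
        const imagePath = `image-${index}.png`;
        fs.writeFileSync(imagePath, image.buffer);                     // temporary frame file
        command.input(imagePath).inputOptions([`-loop 1`, `-t ${duration}`]);
    });

    const audioPath = "audio.mp3";
    fs.writeFileSync(audioPath, audio);                                // write before adding as input
    command.input(audioPath);                                          // added once; its index is images.length

    const filters: Array<{ filter: string; options?: unknown; inputs?: string | string[]; outputs?: string }> = [];
    const segments: string[] = [];
    images.forEach((_, index) => {
        // scale to a common size so concat accepts the segments, then fade in from black
        filters.push({ filter: "scale", options: "1920:1080", inputs: `${index}:v`, outputs: `s${index}` });
        filters.push({ filter: "fade", options: { type: "in", start_time: 0, duration: 1 }, inputs: `s${index}`, outputs: `v${index}` });
        segments.push(`v${index}`);
    });
    filters.push({ filter: "concat", options: { n: images.length, v: 1, a: 0 }, inputs: segments, outputs: "outv" });
    command.complexFilter(filters);

    return new Promise((resolve, reject) => {
        command
            .outputOptions(["-map [outv]", `-map ${images.length}:a`, `-t ${videoLength}`])
            .output("output.mp4")
            .on("end", () => resolve())
            .on("error", reject)
            .run();
    });
}

    For true crossfades between consecutive images, ffmpeg’s xfade filter is the usual tool, at the cost of computing an offset for each pair of segments.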

  • How to get file from cloud storage and process as local file without downloading?

    23 October 2017, by Alex

    I am working on a project where I have to extract frames from a video using ffmpeg (node.js). I first upload the video to Firebase Storage from my client, and then I want to process it on the backend server. However, ffmpeg only accepts a file as if it were stored locally.

    const ff = new ffmpeg('C:/Users/alexh/Desktop/alex/name.avi');

    It will not work with a URL. I am wondering whether there is any way I can get the file from a URL as if it were stored locally, or whether Firebase can provide a way to get the file. I don’t want to use a Firebase trigger event because I want to send an HTTP request to the backend server.

    Thank you so much
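
    For what it’s worth, ffmpeg can read its input over HTTP(S), so one way to avoid writing the file to local disk first is to hand fluent-ffmpeg a signed download URL for the uploaded object. A rough sketch, assuming the backend uses the Google Cloud Storage SDK that backs Firebase Storage; the bucket name, object path and the one-frame-per-second extraction are placeholders.

import ffmpeg from "fluent-ffmpeg";
import { Storage } from "@google-cloud/storage";

// Sketch: generate a short-lived signed URL for the uploaded video and let ffmpeg
// fetch it over HTTP while extracting one frame per second.
async function extractFramesFromBucket(): Promise<void> {
    const storage = new Storage();
    const file = storage.bucket("my-project.appspot.com").file("videos/name.avi"); // placeholders

    const [url] = await file.getSignedUrl({
        action: "read",
        expires: Date.now() + 60 * 60 * 1000, // valid for one hour
    });

    await new Promise<void>((resolve, reject) => {
        ffmpeg(url)
            .outputOptions(["-vf fps=1"])      // one frame per second, as an example
            .output("frame-%03d.png")
            .on("end", () => resolve())
            .on("error", reject)
            .run();
    });
}

    fluent-ffmpeg also accepts a readable stream (file.createReadStream()) as input, but piped input cannot be seeked, which matters for containers such as AVI that keep their index at the end of the file.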

  • Live streaming and simultaneous local/server video saving with Insta360/Theta 360 camera [closed]

    13 August 2023, by Fornow

    I'm currently working on a project that involves live streaming video from a 360 camera, specifically the Insta360 and Theta models, while also saving the streamed video either locally or on a remote server. I'm relatively new to both live streaming and working with 360 cameras, so I'm seeking guidance on the best approach to achieve this.

    My primary goals are as follows:

    1. Live Streaming: I want to be able to stream the real-time video captured by the 360 camera to a web platform or application, allowing users to experience the immersive 360 content as it happens.

    2. Simultaneous Video Saving: In addition to live streaming, I also need to save the streamed video. This can either be saved locally on the device running the streaming process or on a remote server. The saved video should ideally retain its 360 nature and high-quality resolution.

    I've been researching various technologies and frameworks like WebRTC for live streaming, but I'm unsure about the compatibility and best practices when dealing specifically with 360 cameras like Insta360 and Theta. Additionally, I'm uncertain about the most efficient way to save the streamed video while maintaining its immersive properties.

    If anyone has experience with live streaming from 360 cameras and simultaneously saving the content, could you please provide insights into the following:

    • Recommended libraries, SDKs, or frameworks for live streaming 360 video from Insta360 or Theta cameras.
    • Tips for ensuring the streamed video retains its 360 attributes and high quality.
    • Best practices for saving the streamed video either locally or on a remote server while the live stream is ongoing.

    Any code examples, tutorials, or step-by-step guides would be greatly appreciated. Thank you in advance for your help!
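
    On the recording-while-streaming point specifically: ffmpeg can write a single input to several outputs in one run, and fluent-ffmpeg exposes this through repeated .output() calls. A rough sketch, assuming the camera’s stitched feed is already reachable as an RTSP URL (getting that URL out of the Insta360 or Theta SDKs is a separate step, and every URL below is a placeholder):

import ffmpeg from "fluent-ffmpeg";

// Sketch: copy the camera feed to an RTMP ingest point for the live audience and,
// in the same process, archive it to a local MP4. Stream copy (-c copy) avoids
// re-encoding, so the 360 resolution and projection metadata are left untouched.
function streamAndRecord(cameraUrl: string, rtmpUrl: string, archivePath: string): void {
    ffmpeg(cameraUrl)
        .inputOptions(["-rtsp_transport tcp"])            // assumption: the camera serves RTSP over TCP
        .output(rtmpUrl)
        .outputOptions(["-c copy", "-f flv"])             // RTMP ingest expects an FLV container
        .output(archivePath)
        .outputOptions(["-c copy", "-movflags +faststart"])
        .on("error", (err) => console.error("ffmpeg error:", err.message))
        .on("end", () => console.log("stream ended, archive written to", archivePath))
        .run();
}

// Hypothetical usage; replace with the real camera and ingest URLs.
streamAndRecord("rtsp://192.168.42.1/live", "rtmp://ingest.example.com/live/streamKey", "recording.mp4");

    The same effect is available from the ffmpeg command line through the tee muxer; re-encoding instead of -c copy only becomes necessary if the ingest server requires a different codec or bitrate.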