Media (0)

No media matching your criteria is available on the site.

Other articles (39)

  • MediaSPIP: Modification of object creation and final publication rights

    11 November 2010, by

    By default, MediaSPIP allows the creation of 5 types of objects.
    Also by default, the rights to create and to definitively publish these objects are reserved for administrators, but they can of course be configured by the webmasters.
    These rights are locked for several reasons: because authorizing publication should be a deliberate choice of the webmaster rather than a platform-wide default; and because having an account can also serve other purposes, (...)

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution name    Version name             Version number
        Debian               Squeeze                  6.x.x
        Debian               Wheezy                   7.x.x
        Debian               Jessie                   8.x.x
        Ubuntu               The Precise Pangolin     12.04 LTS
        Ubuntu               The Trusty Tahr          14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Selection of projects using MediaSPIP

    2 May 2011, by

    The examples below are representative of specific uses of MediaSPIP for particular projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen such associations. Its members (...)

On other sites (7381)

  • Trying to capture display output for real-time analysis with OpenCV; I need help with interfacing with the OS for input

    26 July 2024, by mirari

    I want to apply operations from the OpenCV computer vision library, in real time, to video captured from my computer display.
The idea in this particular case is to detect interesting features during gameplay in a popular game and provide the user with an enhanced experience; but I can think of several other scenarios where one would want live access to this data as well.
At any rate, for the development phase it might be acceptable to use canned video, but for the final application, performance and responsiveness are obviously critical.

    I am trying to do this on Ubuntu 10.10 as of now, and would prefer to use a UNIX-like system, but any options are of interest.
My C skills are very limited, so whenever talking to OpenCV through Python is possible, I try to use that instead.
Please note that I am trying to capture NOT from a camera device, but from a live stream of display output; and I'm at a loss as to how to take the input. As far as I can tell, CaptureFromCAM works only for camera devices, and it seems to me that the requirement for real-time performance in the end result makes storing to a file and reading back through CaptureFromFile a bad option.

    The most promising route I have found so far seems to be using ffmpeg with the x11grab option to capture from an X11 display; e.g. the command
ffmpeg -f x11grab -sameq -r 25 -s wxga -i :0.0 out.mpg
captures 1366x768 of display 0 to 'out.mpg'.
I imagine it should be possible to treat the output stream from ffmpeg as a file to be read by OpenCV (presumably with the CaptureFromFile function), maybe by using pipes; but this is all at a much higher level than I have ever dealt with before, and I could really use some directions.
Do you think this approach is feasible? And more importantly, can you think of a better one? How would you do it?
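    A minimal sketch of that pipe idea in Python, with no intermediate file: ffmpeg decodes the x11grab capture to raw BGR frames on stdout, and each fixed-size chunk is reshaped into a numpy array ready for any cv2 call. This assumes ffmpeg (built with x11grab), numpy, and the OpenCV Python bindings are installed; the display name and resolution are placeholders.

```python
import subprocess
import numpy as np

def grab_frames(display=":0.0", width=1366, height=768, fps=25):
    """Yield BGR frames grabbed from an X11 display via an ffmpeg pipe."""
    cmd = [
        "ffmpeg", "-f", "x11grab",
        "-r", str(fps), "-s", f"{width}x{height}",
        "-i", display,
        # Emit raw BGR24 frames on stdout instead of an encoded file.
        "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1",
    ]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.DEVNULL)
    frame_size = width * height * 3  # bytes per BGR24 frame
    try:
        while True:
            raw = proc.stdout.read(frame_size)
            if len(raw) < frame_size:  # stream ended
                break
            # One chunk is exactly one frame; shape it for OpenCV.
            yield np.frombuffer(raw, np.uint8).reshape((height, width, 3))
    finally:
        proc.kill()

# Usage (cv2 assumed installed):
# for frame in grab_frames():
#     edges = cv2.Canny(frame, 100, 200)
```

    Each yielded array has the same layout as a frame returned by OpenCV's own capture functions, so the CaptureFromCAM/CaptureFromFile step is skipped entirely.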

  • Exporting MPEG Transport Stream (.ts) keyframes to images in C/C++? Libavcodec / FFMPEG?

    11 May 2021, by CyberBully2003

    I have some buffers made up of 188-byte TS packets. When I write them to a file, I can successfully view these .ts files in a video player. MPEG-2/H.264 is the format of the Transport Streams.

    Now I would like to export the keyframes from these Transport Stream buffers (or .ts files) as .jpeg or some other common image format in my C/C++ project.

    This is a trivial task from the command line using ffmpeg, where I just feed it the .ts file and some parameters.

    However, for the purposes of this project, I would like to accomplish this conversion/export of keyframes as images in code, inside my current C/C++ project, because the raw bytes of these generated images will be put into another format.

    People online seem to recommend using libavcodec. There is an mpegts file in the ffmpeg source that seems like it might have some of the backend needed to do what I want.

    However, the steps needed to achieve this task using the library are not apparent.

    I know I could call ffmpeg from C++ and use stdin, but this isn't a preferred solution for this project.

    If someone could give me some guidance (and even better some example code) to accomplish this task, it would be greatly appreciated!

  • Unable to stream webcam buffer (incoming from React to Node.js via socket using the MediaRecorder API) to RTMP URL using ffmpeg

    26 September 2024, by Utkarsh Gangwar

    I have React (or simple vanilla) client-side code which sends the webcam feed as a buffer via a socket to our Node.js backend.

    Now at the backend we are receiving that buffer, and when we try to write it to a file, it works.

    const WebSocket = require('ws');
const ffmpeg = require('fluent-ffmpeg');
const fs = require('fs');
const path = require('path');
const { PassThrough } = require('stream');

const wss = new WebSocket.Server({ port: 4000 });

wss.on('connection', ws => {
    console.log('Client connected');
    
    const passThrough = new PassThrough();
    const outputFilePath = path.join('output', `${Date.now()}.mp4`);
    
    ffmpeg()
        .input(passThrough)
        .inputFormat('webm')
        .output(outputFilePath)
        .on('end', () => {
            console.log('File has been written to disk');
        })
        .on('error', (err) => {
            console.error('Error:', err);
        })
        .run();
    
    ws.on('message', message => {
        passThrough.write(message);
    });

    ws.on('close', () => {
        passThrough.end();
        console.log('Client disconnected');
    });
});

console.log('WebSocket server running on ws://localhost:4000');

    The camera feed is written into a file.
    We have tested the RTMP URL using OBS Studio.
    We have tried directly accessing the mic and camera with the ffmpeg command line and streaming it:

    ffmpeg -f dshow -rtbufsize 100M  -i video="HD Webcam":audio="@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{14A3EE7A-6A25-4CC6-95C3-F9A689E3415D}" -c:v libx264 -preset veryfast -b:v 1500k -c:a aac -b:a 128k -f flv rtmp://ip_address/live/stream_key

    which also works and streams to the RTMP URL.

    Now we are trying to send that buffer to the RTMP server, which is not working.

    const WebSocket = require('ws');
const ffmpeg = require('fluent-ffmpeg');
const fs = require('fs');
const { PassThrough } = require('stream');

// Replace this with your RTMP server URL
const RTMP_URL = 'rtmp://ip/live/stream_key';

const wss = new WebSocket.Server({ port: 4000 });

wss.on('connection', ws => {
    console.log('Client connected');

    const passThrough = new PassThrough();

    // Create a write stream for logging FFmpeg output
    const logStream = fs.createWriteStream('ffmpeg-log.txt', { flags: 'a' });

    // Set up FFmpeg to stream directly to the RTMP server.
    // Keep a reference to the command itself: .run() returns undefined,
    // so chaining it would leave nothing to kill() later.
    const command = ffmpeg()
        .input(passThrough)
        .inputFormat('webm') // Adjust if necessary
        .videoCodec('libx264')
        .audioCodec('aac')
        .format('flv') // Output format for RTMP
        .output(RTMP_URL)
        .on('start', commandLine => {
            console.log('FFmpeg process started with command:', commandLine);
        })
        // fluent-ffmpeg emits one 'stderr' event per line of FFmpeg output
        .on('stderr', line => {
            logStream.write(line + '\n');
        })
        .on('error', (err) => {
            console.error('FFmpeg error:', err);
        })
        .on('end', () => {
            console.log('FFmpeg process ended');
        });

    command.run();

    ws.on('message', message => {
        passThrough.write(message);
    });

    ws.on('close', () => {
        passThrough.end();
        console.log('Client disconnected');
        command.kill('SIGINT'); // Stop FFmpeg when the client disconnects
    });
});

console.log('WebSocket server running on ws://localhost:4000');


    Can anyone suggest how to stream the webcam feed's audio/video to RTMP using Node.js, given that we receive a continuous buffer from the client side?