
Media (1)


Other articles (43)

  • Videos

    21 April 2011

    Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 <video> tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one) and that each browser only supports certain video formats natively.
    Its main advantage is that video playback is handled natively by the browser, which removes the need for Flash and (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

On other sites (6039)

  • Live audio using ffmpeg, javascript and nodejs

    8 November 2017, by klaus

    I am new to this, so please don't hang me for the poor grammar. I am trying to create a proof-of-concept application which I will later extend. It does the following: we have an HTML page which asks for permission to use the microphone. We capture the microphone input and send it via websocket to a Node.js app.

    JS (Client):

    var bufferSize = 4096;
    var context = new AudioContext();   // Web Audio context used by the nodes below
    var socket = new WebSocket(URL);
    var myPCMProcessingNode = context.createScriptProcessor(bufferSize, 1, 1);
    myPCMProcessingNode.onaudioprocess = function(e) {
      // mono float32 samples in [-1, 1] at context.sampleRate
      var input = e.inputBuffer.getChannelData(0);
      socket.send(convertFloat32ToInt16(input));
    };

    function convertFloat32ToInt16(buffer) {
      var l = buffer.length;
      var buf = new Int16Array(l);
      while (l--) {
        // clamp to [-1, 1] before scaling to 16-bit
        buf[l] = Math.max(-1, Math.min(1, buffer[l])) * 0x7FFF;
      }
      return buf.buffer;
    }

    navigator.mediaDevices.getUserMedia({audio: true, video: false})
      .then(function(stream) {
        var microphone = context.createMediaStreamSource(stream);
        microphone.connect(myPCMProcessingNode);
        myPCMProcessingNode.connect(context.destination);
      })
      .catch(function(e) {});

    On the server we take each incoming buffer, run it through ffmpeg, and send whatever comes out of its stdout to another device using a Node.js 'http' POST request. The device has a speaker. We are basically trying to create a one-way audio link from the browser to the device.

    Node.js (Server):

    var WebSocketServer = require('websocket').server;
    var http = require('http');
    var children = require('child_process');

    // HTTP server that the WebSocket server attaches to; the port here is an
    // arbitrary example and must match whatever the client page connects to.
    var httpServer = http.createServer(function(request, response) {
      response.writeHead(404);
      response.end();
    });
    httpServer.listen(8080);

    var wsServer = new WebSocketServer({ httpServer: httpServer });

    wsServer.on('request', function(request) {
     var connection = request.accept(null, request.origin);
     connection.on('message', function(message) {
       if (message.type === 'utf8') { /*NOP*/ }
       else if (message.type === 'binary') {
         ffm.stdin.write(message.binaryData);
       }
     });
     connection.on('close', function(reasonCode, description) {});
     connection.on('error', function(error) {});
    });

    // ffmpeg reads raw s16le PCM from stdin and writes 8-bit PCM AIFF to stdout
    var ffm = children.spawn(
       './ffmpeg.exe'
      ,'-stdin -f s16le -ar 48k -ac 2 -i pipe:0 -acodec pcm_u8 -ar 48000 -f aiff pipe:1'.split(' ')
    );

    ffm.on('exit',function(code,signal){});

    // forward whatever ffmpeg emits on stdout to the device via the chunked POST below
    ffm.stdout.on('data', (data) => {
      req.write(data);
    });

    var options = {
     host: 'xxx.xxx.xxx.xxx',
     port: xxxx,
     path: '/path/to/service/on/device',
     method: 'POST',
     headers: {
      'Content-Type': 'application/octet-stream',
      'Content-Length': 0,
      'Authorization' : 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
      'Transfer-Encoding' : 'chunked',
      'Connection': 'keep-alive'
     }
    };

    var req = http.request(options, function(res) {});

    The device supports only continuous POST and only a couple of formats (ulaw, aiff, wav).

    This solution doesn't seem to work. From the device's speaker we only hear something like white noise.

    Also, I think I may have a problem with the buffer I am sending to ffmpeg's stdin: I tried dumping whatever comes out of the websocket to a .wav file and playing it with VLC, and it plays the whole recording far too fast, about 10 seconds of recording in roughly 1 second.
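
    For reference, the page above sends mono 16-bit samples at the AudioContext's sample rate (often 44100 Hz), while the spawn arguments declare the input as stereo at 48 kHz, which is one plausible contributor to the sped-up, noisy playback described above. A sketch of input flags that describe what the client actually sends (the 44100 value is an assumption and should really be read from context.sampleRate):

    var ffm = children.spawn('./ffmpeg.exe', [
      '-f', 's16le',   // raw signed 16-bit little-endian PCM in...
      '-ar', '44100',  // ...at the browser's real sample rate (assumed here)...
      '-ac', '1',      // ...mono, as produced by getChannelData(0)
      '-i', 'pipe:0',
      '-acodec', 'pcm_u8', '-ar', '48000', '-f', 'aiff', 'pipe:1'
    ]);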

    I am new to audio processing and have searched for about 3 days now for ways to improve this, without finding anything.

    I would like to ask the community for 2 things:

    1. Is something wrong with my approach? What more can I do to make this work? I will post more details if required.

    2. If what I am doing is reinventing the wheel, then I would like to know what other software or 3rd-party service (like Amazon or whatever) can accomplish the same thing.

    Thank you.

  • How to feed frames one by one and grab decoded frames using LibAV?

    22 September 2024, by Alvan Rahimli

    I am integrating a third-party system where I send a TCP packet to request e.g. 5 frames from a live CCTV camera stream, and it sends those 5 frames one by one. Each packet is an h264-encoded frame, wrapped with some related data.

    I need to write code using Libav where I can:

    1. Feed the frames one by one using AVIOContext (or something similar).
    2. Grab the decoded frame.
    3. Draw it to a window (not relevant to the question, included for context).

    I've been doing the same with GStreamer by creating a pipeline like this:

    AppSrc -> H264Parser -> H264Decoder -> FrameGrabber

    The code below is what I was able to write so far:

using System.Runtime.InteropServices;
using FFmpeg.AutoGen;

namespace AvIoCtxExperiment;

public static unsafe class Program
{
    public static void Main()
    {
        ffmpeg.avdevice_register_all();
        Console.WriteLine($"FFmpeg v: {ffmpeg.av_version_info()}");

        // Generous buffer size for I frames (~43KB)
        const int bufferSize = 50 * 1024;

        var buff = (byte*)ffmpeg.av_malloc(bufferSize);
        if (buff == null)
            throw new Exception("Buffer is null");

        // A mock frame provider. Frames are stored in separate files and this reads & returns them one-by-one.
        var frameProvider = new FrameProvider(@"D:\Frames-1", 700);
        var gch = GCHandle.Alloc(frameProvider);

        var avioCtx = ffmpeg.avio_alloc_context(
            buffer: buff,
            buffer_size: bufferSize,
            write_flag: 0,
            opaque: (void*)GCHandle.ToIntPtr(gch),
            read_packet: new avio_alloc_context_read_packet(ReadFunc),
            write_packet: null,
            seek: null);

        var formatContext = ffmpeg.avformat_alloc_context();
        formatContext->pb = avioCtx;
        formatContext->flags |= ffmpeg.AVFMT_FLAG_CUSTOM_IO;

        var openResult = ffmpeg.avformat_open_input(&formatContext, null, null, null);
        if (openResult < 0)
            throw new Exception("Open Input Failed");

        if (ffmpeg.avformat_find_stream_info(formatContext, null) < 0)
            throw new Exception("Find StreamInfo Failed");

        AVPacket packet;
        while (ffmpeg.av_read_frame(formatContext, &packet) >= 0)
        {
            Console.WriteLine($"GRAB: {packet.buf->size}");
            ffmpeg.av_packet_unref(&packet);
        }
    }

    private static int ReadFunc(void* opaque, byte* buf, int bufSize)
    {
        var frameProvider = (FrameProvider?)GCHandle.FromIntPtr((IntPtr)opaque).Target;
        if (frameProvider == null)
        {
            return 0;
        }

        byte[] managedBuffer = new byte[bufSize];

        var fBuff = frameProvider.NextFrame();
        if (fBuff == null)
        {
            return ffmpeg.AVERROR_EOF;
        }

        int bytesRead = fBuff.Length;
        fBuff.CopyTo(managedBuffer, 0);

        if (bytesRead > 0)
        {
            Marshal.Copy(managedBuffer, 0, (IntPtr)buf, bytesRead);
            Console.WriteLine($"READ size: {fBuff.Length}");
            return bytesRead;
        }

        return ffmpeg.AVERROR_EOF;
    }
}
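
    The loop above only demuxes: av_read_frame hands back compressed packets, not pictures. To actually grab decoded frames, the usual libav pattern is to open a decoder from the stream's codec parameters and then run avcodec_send_packet / avcodec_receive_frame. Below is a rough, untested sketch in the same FFmpeg.AutoGen style (error handling trimmed, variable names are mine); it would replace the read loop in Main:

var streamIndex = ffmpeg.av_find_best_stream(
    formatContext, AVMediaType.AVMEDIA_TYPE_VIDEO, -1, -1, null, 0);
if (streamIndex < 0)
    throw new Exception("No video stream found");

var codecpar = formatContext->streams[streamIndex]->codecpar;
var codec = ffmpeg.avcodec_find_decoder(codecpar->codec_id);
var codecCtx = ffmpeg.avcodec_alloc_context3(codec);
ffmpeg.avcodec_parameters_to_context(codecCtx, codecpar);
if (ffmpeg.avcodec_open2(codecCtx, codec, null) < 0)
    throw new Exception("Decoder open failed");

var frame = ffmpeg.av_frame_alloc();
AVPacket packet;
while (ffmpeg.av_read_frame(formatContext, &packet) >= 0)
{
    if (packet.stream_index == streamIndex)
    {
        // Push the compressed packet into the decoder...
        ffmpeg.avcodec_send_packet(codecCtx, &packet);

        // ...and drain every frame it is ready to hand back.
        while (ffmpeg.avcodec_receive_frame(codecCtx, frame) == 0)
        {
            // frame->data / frame->linesize now hold the decoded picture
            // (typically YUV420P for h264); convert or render it here.
            Console.WriteLine($"DECODED: {frame->width}x{frame->height} pts={frame->pts}");
        }
    }
    ffmpeg.av_packet_unref(&packet);
}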


    


    The second thing that confuses me is that with AppSrc I can feed a frame of any size, but with LibAV I have to deal with a fixed buffer size. With AppSrc I am responsible for feeding frames into the pipeline, and GStreamer notifies me via signals when it has enough; with libav, it calls the read_packet delegate itself.
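
    For what it's worth, the read callback is not required to fill the whole buffer: it may return fewer bytes than bufSize, and libav simply calls it again when it wants more. So one way to keep handing over whole frames is to queue whatever does not fit and drain it on the next call. A hedged sketch (untested; BufferedReadFunc and Pending are invented names, and it assumes it sits in the same unsafe class as ReadFunc above):

// Variant of ReadFunc that respects bufSize: bytes of a frame that do not fit
// into libav's buffer are kept and served on the next invocation.
private static readonly System.Collections.Generic.Queue<byte> Pending = new();

private static int BufferedReadFunc(void* opaque, byte* buf, int bufSize)
{
    var frameProvider = (FrameProvider?)GCHandle.FromIntPtr((IntPtr)opaque).Target;
    if (frameProvider == null)
        return ffmpeg.AVERROR_EOF;

    // Top up the queue until the request can be satisfied or we run out of frames.
    while (Pending.Count < bufSize)
    {
        var fBuff = frameProvider.NextFrame();
        if (fBuff == null)
            break;
        foreach (var b in fBuff)
            Pending.Enqueue(b);
    }

    if (Pending.Count == 0)
        return ffmpeg.AVERROR_EOF;

    // Hand libav at most bufSize bytes; it will come back for the rest.
    int n = Math.Min(bufSize, Pending.Count);
    for (int i = 0; i < n; i++)
        buf[i] = Pending.Dequeue();
    return n;
}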

    


    Any help is much appreciated. Thanks.

    


    P.S. I'm writing a C# app, but sample code in C is fine, I can adapt the code myself.

    

