Media (91)

Other articles (30)

  • Submitting improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know, and its integration into the official distribution will be considered.
    You can use the development mailing list to announce it or to ask for help with writing the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is intended to manage sites that publish documents of all types.
    It creates "media": a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;

On other sites (6144)

  • Live audio using ffmpeg, javascript and nodejs

    8 November 2017, by klaus

    I am new to this, so please don't hang me for the poor grammar. I am trying to create a proof-of-concept application which I will later extend. It does the following: we have an HTML page which asks for permission to use the microphone, and we capture the microphone input and send it via websocket to a Node.js app.

    JS (Client):

    // Assumes an AudioContext exists, e.g. var context = new AudioContext();
    var bufferSize = 4096;
    var socket = new WebSocket(URL);
    var myPCMProcessingNode = context.createScriptProcessor(bufferSize, 1, 1);
    myPCMProcessingNode.onaudioprocess = function(e) {
      // Float32 PCM samples in [-1, 1] from the first (only) channel
      var input = e.inputBuffer.getChannelData(0);
      socket.send(convertFloat32ToInt16(input));
    };

    function convertFloat32ToInt16(buffer) {
      var l = buffer.length;
      var buf = new Int16Array(l);
      while (l--) {
        // Clamp to [-1, 1] before scaling to the signed 16-bit range
        buf[l] = Math.max(-1, Math.min(1, buffer[l])) * 0x7FFF;
      }
      return buf.buffer;
    }

    navigator.mediaDevices.getUserMedia({audio:true, video:false})
      .then(function(stream){
        var microphone = context.createMediaStreamSource(stream);
        microphone.connect(myPCMProcessingNode);
        // A ScriptProcessorNode only fires onaudioprocess while connected
        myPCMProcessingNode.connect(context.destination);
      })
      .catch(function(e){});

    On the server we take each incoming buffer, run it through ffmpeg, and send whatever comes out of stdout to another device using the Node.js 'http' POST. The device has a speaker. We are basically trying to create a one-way audio link from the browser to the device.

    Node JS (Server):

    var WebSocketServer = require('websocket').server;
    var http = require('http');
    var children = require('child_process');

    // Spawn ffmpeg once; it reads raw s16le PCM on stdin (pipe:0)
    // and writes AIFF on stdout (pipe:1)
    var ffm = children.spawn(
       './ffmpeg.exe'
      ,'-stdin -f s16le -ar 48k -ac 2 -i pipe:0 -acodec pcm_u8 -ar 48000 -f aiff pipe:1'.split(' ')
    );

    ffm.on('exit',function(code,signal){});

    var options = {
     host: 'xxx.xxx.xxx.xxx',
     port: xxxx,
     path: '/path/to/service/on/device',
     method: 'POST',
     headers: {
      'Content-Type': 'application/octet-stream',
      'Content-Length': 0,
      'Authorization' : 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
      'Transfer-Encoding' : 'chunked',
      'Connection': 'keep-alive'
     }
    };

    // One long-lived chunked POST towards the device
    var req = http.request(options, function(res) {});

    // Forward whatever ffmpeg emits to the device
    ffm.stdout.on('data', (data) => {
     req.write(data);
    });

    // The creation of the HTTP server and of wsServer is not shown in the
    // question; with the 'websocket' package it would look something like:
    //   var httpServer = http.createServer().listen(PORT);
    //   var wsServer = new WebSocketServer({ httpServer: httpServer });
    wsServer.on('request', function(request) {
     var connection = request.accept(null, request.origin);
     connection.on('message', function(message) {
       if (message.type === 'utf8') { /*NOP*/ }
       else if (message.type === 'binary') {
         // Raw Int16 PCM from the browser goes straight to ffmpeg's stdin
         ffm.stdin.write(message.binaryData);
       }
     });
     connection.on('close', function(reasonCode, description) {});
     connection.on('error', function(error) {});
    });

    The device supports only a continuous POST and only a couple of formats (ulaw, aiff, wav).

    This solution doesn't seem to work: from the device's speaker we hear only something like white noise.

    Also, I think I may have a problem with the buffer I am sending to ffmpeg's stdin. I tried dumping whatever comes out of the websocket to a .wav file and playing it with VLC; it plays everything in the recording very fast, so that 10 seconds of recording play in about 1 second.
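
    One plausible cause (my assumption, not something stated in the question): the browser chooses the AudioContext sample rate (often 44100 Hz) and the code above captures a single channel, while the ffmpeg command assumes 48 kHz stereo (-ar 48k -ac 2); any such mismatch makes raw PCM play back at the wrong speed. A minimal sketch of deriving the ffmpeg input arguments from the real capture settings:

    // Sketch: read the actual capture rate instead of hardcoding 48 kHz.
    var context = new AudioContext();
    console.log(context.sampleRate);   // e.g. 44100, browser-dependent

    // One channel is captured above, so declare mono (-ac 1).
    var inputArgs = ('-f s16le -ar ' + context.sampleRate +
                     ' -ac 1 -i pipe:0').split(' ');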

    I am new to audio processing and have been searching for about three days now for ways to improve this, without finding anything.

    I would ask the community for two things:

    1. Is something wrong with my approach? What more can I do to make this work? I will post more details if required.

    2. If what I am doing is reinventing the wheel, then I would like to know what other software or third-party service (like Amazon or whatever) can accomplish the same thing.

    Thank you.

  • Failed to decode HLS by FFMpeg command. Invalid NAL unit 0

    9 March 2024, by Fyodor Khruschov

    On the front-end I create a stream with the chrome.tabCapture.capture or navigator.mediaDevices.getDisplayMedia methods, then send the chunks generated by MediaRecorder to the server. On the server I have an FFmpeg command which decodes the chunks into an .mp4 file. This is the command:

ffmpeg -y -i - -preset veryfast -tune zerolatency \
  -filter_complex "[0:v]split=3[v1][v2][v3];[v1]scale=w=-2:h=1080,fps=30[v1out];[v2]scale=w=-2:h=720,fps=30[v2out];[v3]scale=w=-2:h=480,fps=30[v3out]" \
  -map "[v1out]" -maxrate:0 6M -bufsize:0 12M -keyint_min 100 -g 100 \
  -map "[v2out]" -maxrate:1 3M -bufsize:1 6M -keyint_min 100 -g 100 \
  -map "[v3out]" -maxrate:2 1.5M -bufsize:2 3M -keyint_min 100 -g 100 \
  -c:v libx264 \
  -map a:0 -c:a:0 aac -b:a:0 128k -ac 2 \
  -map a:0 -c:a:1 aac -b:a:1 96k \
  -map a:0 -c:a:2 aac -b:a:2 96k \
  -f hls -hls_time 2 -hls_playlist_type vod \
  -hls_flags independent_segments+temp_file -hls_segment_type fmp4 \
  -hls_segment_filename ./output/ready/output_%v_%03d.m4s \
  -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" -master_pl_name master.m3u8 \
  ./output/ready/stream_%v.m3u8 \
  -map 0:v:0 -map 0:a:0 -c:v copy -c:a aac ./output/download/video.mp4 \
  -map 0:a:0 -ar 16000 -ac 1 -c:a pcm_s16le ./output/captions/audio.wav \
  -loglevel info

    During the process of decoding I see these errors in the logs:

    [extract_extradata @ 0x60000264b250] Invalid NAL unit 0, skipping.
[h264 @ 0x13ff04e60] Invalid NAL unit 0, skipping.
[h264 @ 0x13ff04e60] co located POCs unavailable
[h264 @ 0x13ff04e60] negative number of zero coeffs at 17 0
[h264 @ 0x13ff04e60] error while decoding MB 17 0
[h264 @ 0x13ff04e60] concealing 3388 DC, 3388 AC, 3388 MV errors in B frame
[h264 @ 0x13ff04e60] missing picture in access unit with size 24158
[h264 @ 0x13ff04e60] Invalid NAL unit 0, skipping.
[h264 @ 0x13ff04e60] data partitioning is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[h264 @ 0x13ff04e60] If you want to help, upload a sample of this file to https://streams.videolan.org/upload/ and contact the ffmpeg-devel mailing list. (ffmpeg-devel@ffmpeg.org)
[h264 @ 0x13ff04e60] no frame!
[h264 @ 0x13ff04e60] Unknown SAR index: 18.
[h264 @ 0x13ff04e60] Invalid NAL unit 0, skipping.
[h264 @ 0x13ff04e60] Unknown SAR index: 18.
[h264 @ 0x13ff04e60] number of reference frames (2+4) exceeds max (5; probably corrupt input), discarding one
[h264 @ 0x13ff04e60] number of reference frames (3+3) exceeds max (5; probably corrupt input), discarding one
[h264 @ 0x13ff04e60] number of reference frames (4+2) exceeds max (5; probably corrupt input), discarding one
[h264 @ 0x13ff04e60] FMO is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[h264 @ 0x13ff04e60] sps_id 4 out of range

    This issue is very inconsistent and happens only in rare cases (I can't work out the logic). Most of the time the chunks decode successfully, but sometimes they don't.

    How can I work out where the issue is coming from? And is it possible for FFmpeg to skip the bad data and generate an mp4 file anyway, even with glitches, rather than crash?
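
    For the second question, ffmpeg does expose decoder error-resilience options; the flags below are general-purpose ways of tolerating corrupt input, not a confirmed fix for this particular stream:

    # Sketch: ask ffmpeg to tolerate corrupt input instead of aborting.
    # -err_detect ignore_err  : detected bitstream errors are not treated as fatal
    # -fflags +discardcorrupt : drop packets the demuxer flags as corrupt
    # -fflags +genpts         : regenerate missing presentation timestamps
    ffmpeg -err_detect ignore_err -fflags +discardcorrupt+genpts -i - \
      ... # followed by the rest of the original arguments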

  • How to save audio chunks from client to ffmpeg readable file?

    22 September 2023, by LuckOverflow

    I am live recording audio data from a TS React front-end and need to send it to the server, where it can be saved to a file so that ffmpeg can mix it. The front-end saves the mic data to a blob with mimeType "audio/webm; codecs=opus" when printed in the browser console. I send the exact object that I printed to the server, where logging it indicates it is, or was passed as, a "Buffer" object.

    I have tried saving that Buffer as a webm file, but when I pass that file as input to ffprobe I get the errors "Format matroska,webm detected only with a low score of 1...", "EBML header parsing failed" and "Invalid data found when processing input." I have tried several other formats with no success.

    I need a way to transform this Buffer object into an audio file that ffmpeg can mix. When I am finished, I also need to be able to do the reverse operation, sending audio in the same format to another client for playback, which is currently working.

    Code that records and sends the audio (TS React):

    React Record

    const startRecording = async function () {
      inputStream = await navigator.mediaDevices.getUserMedia({ audio: true });

      mediaRecorder.current = new MediaRecorder(inputStream, { mimeType: "audio/webm; codecs=opus" });

      mediaRecorder.current.ondataavailable = e => {
        console.log(e.data);
        if (e.data.size > 0) {
          socket.emit("recording", e.data);
          console.log("Audio data recorded. Transmitting to server via socketio...");
        }
      };

      // Emit a chunk roughly every 1000 ms
      mediaRecorder.current.start(1000);
    };

    Code that receives and tries to save the Buffer to a file (JS Node.js):

    Server Receive

    socket.on("recording", (chunk) => {
      console.log("Audio chunk received. Transmitting to frontend...");
      socket.broadcast.emit('listening', chunk);

      // Note: toString() here turns out to be the culprit (see the update
      // below); the raw Buffer should be written instead.
      fs.writeFileSync('out.webm', chunk.toString());
      if (counter > 3) {
        console.log("Trying ffmpeg...");

        ffmpegInstance
          .input('out.webm')
          .complexFilter([
            {
              filter: 'amix'
            }])
          .save('./Music/FFMPEGSTREAM.mp3');
      }

      counter++;
    });

    The fluent-ffmpeg interface package is included in the server code, but I have been using ffmpeg in the terminal (Pop OS) to debug. The goal is to save the file to a ram disk and use fluent-ffmpeg to mix it before sending it to a different client for playback. Currently I am just trying to save it to disk and get the ffmpeg command line to work on it.

    Update:
    The problem was that the chunk I was analyzing didn't have the header info. MediaRecorder encodes the whole stream and then slices it into chunks; it does not slice the stream into your specified time slots and encode each slice independently, so only the first chunk carries the WebM header. I have not found a good solution to this. Saving the file without toString(), I believe, results in a playable webm when the header is properly included.
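
    Given that diagnosis, a common workaround (a sketch, assuming all chunks come from one continuous MediaRecorder session) is to append every raw chunk to a single file, so that the header carried by the first chunk prefixes everything that follows:

    const fs = require('fs');

    socket.on("recording", (chunk) => {
      // Append the raw Buffer without toString(): only the first chunk of a
      // MediaRecorder session contains the WebM/EBML header, and later chunks
      // simply continue the stream, so the concatenated file stays readable
      // by ffmpeg/ffprobe.
      fs.appendFileSync('out.webm', chunk);
    });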