
Media (91)

Other articles (43)

  • List of compatible distributions

26 April 2011

The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04
If you want to help us improve this list, you can provide us with access to a machine whose distribution is not mentioned above, or send the necessary fixes to add (...)

  • Support for all media types

10 April 2011

Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheets, presentations), web (html, css), LaTeX, Google Earth) (...)

  • HTML5 audio and video support

10 April 2011

MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player in use was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (4947)

  • How to setup a virtual mic and pipe audio to it from node.js

28 October 2018, by Niellles

Summary of what I am trying to achieve:

    I’m currently doing some work on a Discord bot. I’m trying to join a voice channel, which is the easy part, and then use the combined audio of the speakers in that voice channel as input for a webpage in a web browser. It doesn’t really matter which browser it is as long as it can be controlled with Selenium.


    What I’ve tried/looked into so far

My bot so far is written in Python using the discord.py API wrapper. Unfortunately, listening to audio, as opposed to sending it, is neither well implemented nor well documented in discord.py. This made me decide to switch to node.js (i.e. discord.js) for the voice-channel part of my bot.

After switching to discord.js it was pretty easy to determine who is talking and create an audio stream (PCM stream) for that user. For the next part I thought I'd just pipe the audio stream to a virtual microphone and select that as the audio input in the browser. You can even use FFMPEG from within node.js, to get something that looks like this:

    const Discord = require("discord.js");
    const ffmpeg = require("fluent-ffmpeg"); // fluent-ffmpeg wrapper around FFMPEG
    const client = new Discord.Client();

    client.on('ready', () => {
      const voiceChannel = client.channels.get('SOME_CHANNEL_ID');
      voiceChannel.join()
        .then(conn => {
          console.log('Connected');

          const receiver = conn.createReceiver();

          conn.on('speaking', (user, speaking) => {
            if (speaking) {
              const audioStream = receiver.createPCMStream(user);

              ffmpeg(audioStream)
                  .inputFormat('s32le')
                  .audioFrequency(16000)
                  .audioChannels(1)
                  .audioCodec('pcm_s16le')
                  .format('s16le')
                  .pipe(someVirtualMic); // the missing piece: a writable stream for the virtual mic
            }
          });
        })
        .catch(console.log);
    });

    client.login('SOME_TOKEN');

This last part, creating and streaming to a virtual microphone, has proven to be rather complicated. I've read a ton of SO posts and documentation on both the Advanced Linux Sound Architecture (ALSA) and the JACK Audio Connection Kit, but I simply can't figure out how to set up a virtual microphone that will show up as a mic in my browser, or how to pipe audio to it.

Any help or pointers to a solution would be greatly appreciated!


    Addendum

For the past couple of days I've kept looking into this issue. I've now learned about ALSA loopback devices and feel that the solution must be there.

    I’ve pretty much followed a post that talks about loopback devices and aims to achieve the following :

    Simply imagine that you have a physical link between one OUT and one
    IN of the same device.

I've set up the devices as described in the post, and two new audio devices now show up when selecting a microphone in Firefox. I'd expect one, but that may be because I don't entirely understand the loopback devices (yet).

The loopback devices are created and I think they're linked (if I understood the aforementioned article correctly). Assuming that's the case, the only problem left to tackle is streaming the audio via FFMPEG from within node.js.
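    That remaining step, streaming audio into the loopback with FFMPEG spawned from a child process, can be sketched roughly as follows. This is a hedged sketch in Python rather than node.js; the device name hw:Loopback,0,0 and the PCM parameters are assumptions that depend on how snd-aloop was configured, with the capture side then typically appearing as hw:Loopback,1,0.

```python
import subprocess

def loopback_argv(device="hw:Loopback,0,0", rate=16000, channels=1):
    """Build an ffmpeg argv that reads raw signed 16-bit PCM on stdin
    and plays it into an ALSA device (here one half of a snd-aloop pair)."""
    return ["ffmpeg",
            "-f", "s16le", "-ar", str(rate), "-ac", str(channels),  # describe the raw input
            "-i", "pipe:0",        # take the PCM from stdin
            "-f", "alsa", device]  # play it into the loopback device

# Spawning it and writing PCM into proc.stdin would then feed the "mic":
# proc = subprocess.Popen(loopback_argv(), stdin=subprocess.PIPE)
```

    The same argv could be passed to fluent-ffmpeg or child_process.spawn in node.js; only the device string and sample parameters need to match the loopback setup.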

    Audio devices

  • How do I set ffmpeg pipe output?

5 December 2019, by mr_blond

I need to read the ffmpeg output as a pipe.
Here is a code example:

       public static void PipeTest()
       {
           Process proc = new Process();
           proc.StartInfo.FileName = Path.Combine(WorkingFolder, "ffmpeg");
           proc.StartInfo.Arguments = String.Format("$ ffmpeg -i input.mp3 pipe:1");
           proc.StartInfo.UseShellExecute = false;
           proc.StartInfo.RedirectStandardInput = true;
           proc.StartInfo.RedirectStandardOutput = true;
           proc.Start();

           FileStream baseStream = proc.StandardOutput.BaseStream as FileStream;
           byte[] audioData;
           int lastRead = 0;

           using (MemoryStream ms = new MemoryStream())
           {
               byte[] buffer = new byte[5000];
               do
               {
                   lastRead = baseStream.Read(buffer, 0, buffer.Length);
                   ms.Write(buffer, 0, lastRead);
               } while (lastRead > 0);

               audioData = ms.ToArray();
           }

           using(FileStream s = new FileStream(Path.Combine(WorkingFolder, "pipe_output_01.mp3"), FileMode.Create))
           {
               s.Write(audioData, 0, audioData.Length);
           }
       }

This is the log from ffmpeg when the input file is read:

    Input #0, mp3, from 'norm.mp3':
      Metadata:
        encoder : Lavf58.17.103
      Duration: 00:01:36.22, start: 0.023021, bitrate: 128 kb/s
      Stream #0:0: Audio: mp3, 48000 Hz, stereo, fltp, 128 kb/s
        Metadata:
          encoder : Lavc58.27

Then the pipe:

    [NULL @ 0x7fd58a001e00] Unable to find a suitable output format for '$'
    $: Invalid argument

If I run "-i input.mp3 pipe:1", the log is:

    Unable to find a suitable output format for 'pipe:1'
    pipe:1: Invalid argument

How do I set the correct output? And how is ffmpeg supposed to know what the output format is at all?

  • Using ffmpeg for silence detection with an input pipe

24 August 2018, by Moharrer

I am trying to detect silence in an audio file with ffmpeg in C#.
I want to pipe the input from a C# memory stream and get the silence durations, as with the following command:

    ffmpeg -hide_banner -i pipe:0 -af silencedetect=noise=-50dB:d=0.5 -f null - 

But there is a problem: when the input stream is pumped into the pipe, ffmpeg keeps waiting at the p.WaitForExit() line.
When I change p.WaitForExit() to p.WaitForExit(1000) to force a timeout, the following result is displayed:

    [mp3 @ 00000163818da580] invalid concatenated file detected - using bitrate for duration
    Input #0, mp3, from 'pipe:0':
      Metadata:
        encoder : Lavf57.71.100
      Duration: N/A, start: 0.023021, bitrate: 86 kb/s
      Stream #0:0: Audio: mp3, 48000 Hz, mono, fltp, 86 kb/s
    Stream mapping:
      Stream #0:0 -> #0:0 (mp3 (mp3float) -> pcm_s16le (native))
    Output #0, null, to 'pipe:':
      Metadata:
        encoder : Lavf58.17.101
      Stream #0:0: Audio: pcm_s16le, 48000 Hz, mono, s16, 768 kb/s
        Metadata:
          encoder : Lavc58.21.105 pcm_s16le

    [silencedetect @ 0000023df1786840] silence_start : 50.1098
    [silencedetect @ 0000023df1786840] silence_end : 51.5957 | silence_duration : 1.48588
    [silencedetect @ 0000023df1786840] silence_start : 51.5959
    [silencedetect @ 0000023df1786840] silence_end : 52.127 | silence_duration : 0.531062
    [silencedetect @ 0000023df1786840] silence_start : 52.8622
    [silencedetect @ 0000023df1786840] silence_end : 54.0096 | silence_duration : 1.14733
    [silencedetect @ 0000023df1786840] silence_start : 54.6804

As you can see in the result, silence detection works, though with an error at the start.
This means the input file was pumped into ffmpeg correctly, but ffmpeg keeps waiting.
How can I solve this problem without setting a timeout for p.WaitForExit()?

    private void Execute(string exePath, string parameters, Stream inputStream)
    {
        var p = new Process();
        var sti = p.StartInfo;
        sti.CreateNoWindow = true;
        sti.UseShellExecute = false;
        sti.FileName = exePath;
        sti.Arguments = parameters;
        sti.LoadUserProfile = false;
        sti.RedirectStandardInput = true;
        sti.RedirectStandardOutput = true;
        sti.RedirectStandardError = true;

        p.ErrorDataReceived += P_ErrorDataReceived;
        p.OutputDataReceived += P_OutputDataReceived;

        p.Start();

        p.BeginOutputReadLine();
        p.BeginErrorReadLine();

        var spInput = new StreamPump(inputStream, p.StandardInput.BaseStream, 4064);
        spInput.Pump((pump, result) =>
        {
            pump.Output.Flush();
            inputStream.Dispose();
        });

        // unlimited waiting
        // p.WaitForExit();

        p.WaitForExit(1000);
    }