
Media (91)

Other articles (56)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: it creates an additional version that can easily be viewed online, while keeping the original available for download in case it cannot be read in a web browser; and it retrieves the original document's metadata to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

  • What is an editorial

    21 June 2013, by

    Write your point of view in an article. It will be filed in a section set aside for this purpose.
    An editorial is a text-only article. Its purpose is to collect points of view in a dedicated section. A single editorial is featured on the home page; to read earlier ones, browse the dedicated section.
    You can customize the form used to create an editorial.
    Editorial creation form: For a document of the editorial type, the (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

On other sites (4416)

  • New Challenges

1 January 2014, by silvia

    I finished up at Google last week and am now working at NICTA, an Australian ICT research institute.

    My work with Google was exciting and I learned a lot. I like to think that Google also got a lot out of me – I coded and contributed to some YouTube caption features, I worked on Chrome captions and video controls, and above all I worked on video accessibility for HTML at the W3C.

    I was one of the key authors of the W3C Media Accessibility Requirements document that we created in the Media Accessibility Task Force of the W3C HTML WG. I then went on to help make video accessibility a reality. We created WebVTT and the <track> element and applied it to captions, subtitles, chapters (navigation), video descriptions, and metadata. To satisfy the need for synchronisation of video with other media resources such as sign language video or audio descriptions, we got the MediaController object and the @mediagroup attribute.

    I must say it was a most rewarding time. I learned a lot about being productive at Google, about collaborating successfully at a distance, about how the WebKit community works, and about the new way of writing W3C standards (which is more like pseudo-code). As a consequence, I am now a co-editor of the W3C HTML spec, and it seems I am also about to become the editor of the WebVTT spec.

    At NICTA my new focus of work is WebRTC. There is both a bit of research and a whole bunch of application development involved. I may even get to do some WebKit development, if we identify any issues with the current implementation. I started a week ago and am already amazed by the amount of work going on in the WebRTC space and the amazing number of open source projects playing around with it. Video conferencing is a new challenge and I look forward to it.

  • How to read raw audio data using FFmpeg ?

6 June 2020, by Yousef Alaqra

    I'm trying to use this command to get the audio stream over UDP:

    ffmpeg -i udp://192.168.1.1:6980 -acodec copy

    I get an error when I execute it, which says:

    [udp @ 00000157a76b9a40] bind failed: Error number -10048 occurred
    udp://192.168.1.1:6980: I/O error

    What's the meaning of this error?

    Update:

    I was able to read raw audio data with FFmpeg and write it to a WAV file, using the following command:

    ffmpeg -f u16be -ar 44100 -ac 2 -i 'udp://127.0.0.1:1223' output.wav

    The problem now: since the received network packets carry surrounding metadata, it needs to be stripped out or it results in noise.

    In C# I used Skip() to trim the first 28 bytes of each received packet; how would I achieve this using FFmpeg?
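    As far as I know, FFmpeg itself has no option to drop a fixed-size header from every incoming datagram (its -skip_initial_bytes format option only skips the start of the input once). One possible workaround, a sketch not from the original question, is a tiny UDP relay in Node.js that trims the 28 bytes from each packet before the stream reaches FFmpeg; the ports and function names here are illustrative:

    ```javascript
    // Sketch only: a UDP relay that drops a fixed per-packet header before
    // forwarding the payload on to FFmpeg. Ports and names are illustrative.
    var dgram = require("dgram");

    var HEADER_BYTES = 28; // header length taken from the question

    // Equivalent of C#'s Skip(28): return a view past the header (no copy).
    function stripHeader(packet, headerLen) {
      return packet.subarray(headerLen === undefined ? HEADER_BYTES : headerLen);
    }

    // Listen on inPort and forward trimmed payloads to outHost:outPort,
    // where FFmpeg would read them (e.g. -f u16be -ar 44100 -ac 2 -i udp://...).
    function startRelay(inPort, outPort, outHost) {
      var inSock = dgram.createSocket("udp4");
      var outSock = dgram.createSocket("udp4");
      inSock.on("message", function (msg) {
        outSock.send(stripHeader(msg), outPort, outHost || "127.0.0.1");
      });
      inSock.bind(inPort);
      return inSock;
    }
    ```

    FFmpeg would then be pointed at the relay's output port instead of the original stream.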

    Update:

    I was able to read the raw bytes from the UDP packets by spawning a child process in Node.js:

    var http = require("http");
    var port = 8888;
    var host = "localhost";
    var children = require("child_process");

    http
      .createServer(function (req, res) {
        //ffmpeg -f s16le -ar 48000 -ac 2 -i 'udp://192.168.1.230:65535' -b:a 128k -f webm -
        var ffm = children.spawn(
          "ffmpeg",
          "-f s16le -ar 48000 -ac 2 -i udp://192.168.1.230:65535 -b:a 128k -f webm -".split(
            " "
          )
        );

        res.writeHead(200, { "Content-Type": "audio/webm" });
        ffm.stdout.on("data", (data) => {
          console.log(data);
          res.write(data);
        });
      })
      .listen(port, host);

    console.log("Server running at http://" + host + ":" + port + "/");

    As you can see in the code sample above, I'm piping the output of the child process into the response, so that I can hear the audio in the browser.

    I'm receiving data after spawning the child process, but the browser is unable to play the audio for a reason I still need to figure out.

    Do you have an idea of what I am missing?
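    Not an answer from the original thread, but one debugging step worth sketching: the server above never reads FFmpeg's stderr, and FFmpeg writes all of its diagnostics there, so a format mismatch or a failed UDP connection would be silent. A small, hypothetical helper makes those messages visible:

    ```javascript
    var spawn = require("child_process").spawn;

    // Hypothetical helper: mirror a child process's stderr into a log
    // function so FFmpeg's diagnostics are not lost.
    function logStderr(child, log) {
      log = log || console.error;
      child.stderr.on("data", function (chunk) {
        log(chunk.toString());
      });
      return child;
    }
    ```

    In the server above this would be a one-liner after the spawn call, e.g. logStderr(ffm);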