Other articles (98)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images (png, gif, jpg, bmp and more); audio (MP3, Ogg, Wav and more); video (AVI, MP4, OGV, mpg, mov, wmv and more); text, code and other data (OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Support for all media types

    10 April 2011

    Unlike many programs and other modern document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (6192)

  • Use deck.js as a remote presentation tool

    8 January 2014, by silvia

    deck.js is one of the new HTML5-based presentation tools. It's simple to use, in particular for your basic, everyday presentation needs. You can also create more complex slides with animations etc. if you know your HTML and CSS.

    Yesterday at linux.conf.au (LCA), I gave a presentation using deck.js. But I didn’t give it from the lectern in the room in Perth where LCA is being held – instead I gave it from the comfort of my home office at the other end of the country.

    I used my laptop with its built-in webcam and my Chrome browser to give this presentation. Beforehand, I had uploaded the presentation to a Web server and shared the link with the organiser of my speaker track, who was on site in Perth and had set up his laptop in the same fashion as mine. His screen was projecting the Chrome tab in which my slides were loaded and he had hooked up the audio output of his laptop to the room speaker system. His camera was pointed at the audience so I could see their reaction.

    I loaded a slide master URL:
    http://html5videoguide.net/presentations/lca_2014_webrtc/?master
    and the room loaded the URL without query string:
    http://html5videoguide.net/presentations/lca_2014_webrtc/.

    Then I gave my talk exactly as I would if I were in the same room. Yes, it felt exactly as though I was there, including nervousness and audience feedback.

    How did we do that? WebRTC (Web Real-time Communication) to the rescue, of course!

    We used one of the modules of the rtc.io project called rtc-glue to add the video conferencing functionality and the slide navigation to deck.js. It was actually really really simple!

    Here are the few things we added to deck.js to make it work:

    • Code added to index.html to make the video connection work:
      <meta name="rtc-signalhost" content="http://rtc.io/switchboard/">
      <meta name="rtc-room" content="lca2014">
      ...
      <video id="localV" rtc-capture="camera" muted></video>
      <video id="peerV" rtc-peer rtc-stream="localV"></video>
      ...
      <script src="glue.js"></script>
      <script>
      glue.config.iceServers = [{ url: 'stun:stun.l.google.com:19302' }];
      </script>

      The iceServers config is required to punch through firewalls – you may also need a TURN server. Note that you need a signalling server – in our case we used http://rtc.io/switchboard/, which runs the code from rtc-switchboard.
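
      If STUN alone doesn't punch through your firewall, a TURN entry can be added to the same config. A minimal sketch (the TURN hostname and credentials below are placeholders, not a real server):

      glue.config.iceServers = [
        { url: 'stun:stun.l.google.com:19302' },
        // placeholder TURN server – substitute your own host and credentials
        { url: 'turn:turn.example.org:3478', username: 'user', credential: 'secret' }
      ];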

    • Added glue.js library to deck.js:

      Downloaded from https://raw.github.com/rtc-io/rtc-glue/master/dist/glue.js into the source directory of deck.js.

    • Code added to index.html to synchronize slide navigation:
      glue.events.once('connected', function(signaller) {
       if (location.search.slice(1) !== '') {
         $(document).bind('deck.change', function(evt, from, to) {
           signaller.send('/slide', {
             idx: to,
             sender: signaller.id
           });
         });
       }
       signaller.on('slide', function(data) {
         console.log('received notification to change to slide: ', data.idx);
         $.deck('go', data.idx);
       });
      });

      This simply registers a callback on the slide master end to send a slide position message to the room end, and a callback on the room end that initiates the slide navigation.

    And that's it!

    You can find my slide deck on GitHub.

    Feel free to write your own slides in this manner – I would love to have more users of this approach. It should also be fairly simple to extend this to share pointer positions, so you can actually use the mouse pointer to point to things on your slides remotely; a rough sketch of that idea follows below. Would love to hear your experiences!
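
    As a rough sketch of the pointer-sharing idea (the '/pointer' message name and the #remotePointer marker element are just my invention, mirroring the '/slide' message above; this would sit inside the same 'connected' callback as the slide-sync code):

      // master end: broadcast normalised mouse coordinates
      $(document).on('mousemove', function(evt) {
        signaller.send('/pointer', {
          x: evt.pageX / $(document).width(),
          y: evt.pageY / $(document).height()
        });
      });
      // room end: move a floating marker element to the received position
      signaller.on('pointer', function(data) {
        $('#remotePointer').css({
          left: (data.x * 100) + '%',
          top: (data.y * 100) + '%'
        });
      });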

    Note that the slides are actually a talk about the rtc.io project, so if you want to find out more about these modules and what other things you can do, read the slide deck or watch the talk when it has been published by LCA.

    Many thanks to Damon Oehlman for his help in getting this working.

    BTW: somebody should really fix that print style sheet for deck.js – I'm only ever getting the one slide that is currently showing.

  • ffmpeg-normalize exits with OSError: [WinError 193] %1 is not a valid Win32 application

    1 September 2021, by Moldy1

    I have a large number of files whose volume I want to normalize with a simple method, using a basic loop.
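
    For context, the kind of loop I mean is just a plain batch file along these lines (the out folder is only a placeholder):

    for %%f in (*.mkv) do ffmpeg-normalize "%%f" -o "out\%%f"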

    


    When I try to use ffmpeg-normalize, the application exits with the error below. I searched for the error online but I can't find any issue similar to mine. I thought it might be a PATH or file-type association problem, but those look OK.

    


    Can anyone give me an explanation for this error and a possible fix for it, please?

    


    D:\Test>ffmpeg-normalize.exe in.mkv -o out.mkv
    Traceback (most recent call last):
      File "c:\users\les\appdata\local\programs\python\python39\lib\runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "c:\users\les\appdata\local\programs\python\python39\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "C:\Users\Les\AppData\Local\Programs\Python\Python39\Scripts\ffmpeg-normalize.exe\__main__.py", line 7, in <module>
      File "c:\users\les\appdata\local\programs\python\python39\lib\site-packages\ffmpeg_normalize\__main__.py", line 409, in main
        ffmpeg_normalize = FFmpegNormalize(
      File "c:\users\les\appdata\local\programs\python\python39\lib\site-packages\ffmpeg_normalize\_ffmpeg_normalize.py", line 68, in __init__
        self.has_loudnorm_capabilities = ffmpeg_has_loudnorm()
      File "c:\users\les\appdata\local\programs\python\python39\lib\site-packages\ffmpeg_normalize\_cmd_utils.py", line 185, in ffmpeg_has_loudnorm
        cmd_runner.run_command()
      File "c:\users\les\appdata\local\programs\python\python39\lib\site-packages\ffmpeg_normalize\_cmd_utils.py", line 101, in run_command
        p = subprocess.Popen(
      File "c:\users\les\appdata\local\programs\python\python39\lib\subprocess.py", line 947, in __init__
        self._execute_child(args, executable, preexec_fn, close_fds,
      File "c:\users\les\appdata\local\programs\python\python39\lib\subprocess.py", line 1416, in _execute_child
        hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
    OSError: [WinError 193] %1 is not a valid Win32 application

    D:\Test>path
    PATH=C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files\Calibre2\;C:\Program Files (x86)\Calibre2\;C:\Users\Les\AppData\Local\Programs\Python\Python39\Scripts\;C:\Users\Les\AppData\Local\Programs\Python\Python39\;C:\Users\Les\AppData\Local\Microsoft\WindowsApps;D:\MABS\local64\bin-video;C:\Program Files (x86)\sox-14-4-2

    D:\Test>ftype python.file
    python.file="C:\Users\Les\AppData\Local\Programs\Python\Python39\python.exe" "%1"

    D:\Test>python.exe
    Python 3.9.1 (tags/v3.9.1:1e5d33e, Dec  7 2020, 17:08:21) [MSC v.1927 64 bit (AMD64)] on win32

    ffmpeg -version
    ffmpeg version N-100483-g728b83a7c4-gd67c6c7f6f+4 Copyright (c) 2000-2020 the FFmpeg developers
    built with gcc 10.2.0 (Rev6, Built by MSYS2 project)

    D:\Test>ffmpeg-normalize --version
    ffmpeg-normalize v1.22.1


  • Encoding of D3D11Texture2D to an rtsp stream using libav*

    1 December 2020, by uzer

    First, I'll say up front that I am just beginning with the whole libav* ecosystem and have no experience with DirectX, so please go easy on me.


    I have managed to create an RTSP stream using libav* with a video file as the source. Now I am trying to create an RTSP stream from an ID3D11Texture2D, which I am obtaining from the GDI API using the BitBlt method. Here's my approach for creating a live RTSP stream (a condensed code sketch follows the list below):

    1. Set input context
       • AVFormatContext* ifmt_ctx = avformat_alloc_context();
       • avformat_open_input(&ifmt_ctx, _videoFileName, 0, 0);

    2. Set output context
       • avformat_alloc_output_context2(&ofmt_ctx, NULL, "rtsp", _rtspServerAdress); // RTSP
       • copy all the codec contexts and streams from input to output

    3. Start streaming
       • while av_read_frame(ifmt_ctx, &pkt) is valid, call av_interleaved_write_frame(ofmt_ctx, &pkt);
       • with some timestamp checks and conditions for livestreaming
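
    For clarity, here is a condensed sketch of the working file-based version described above (my own naming; error handling omitted; uses libavformat/libavcodec headers):

    AVFormatContext* ifmt_ctx = avformat_alloc_context();
    avformat_open_input(&ifmt_ctx, _videoFileName, 0, 0);
    avformat_find_stream_info(ifmt_ctx, 0);

    AVFormatContext* ofmt_ctx = NULL;
    avformat_alloc_output_context2(&ofmt_ctx, NULL, "rtsp", _rtspServerAdress);

    // copy every input stream's codec parameters to a new output stream
    for (unsigned i = 0; i < ifmt_ctx->nb_streams; i++)
    {
        AVStream* out_stream = avformat_new_stream(ofmt_ctx, NULL);
        avcodec_parameters_copy(out_stream->codecpar, ifmt_ctx->streams[i]->codecpar);
        out_stream->codecpar->codec_tag = 0;
    }

    avformat_write_header(ofmt_ctx, NULL);

    AVPacket pkt;
    while (av_read_frame(ifmt_ctx, &pkt) >= 0)
    {
        // rescale timestamps from the input stream's time base to the output's
        AVStream* in_stream = ifmt_ctx->streams[pkt.stream_index];
        AVStream* out_stream = ofmt_ctx->streams[pkt.stream_index];
        av_packet_rescale_ts(&pkt, in_stream->time_base, out_stream->time_base);

        av_interleaved_write_frame(ofmt_ctx, &pkt);
        av_packet_unref(&pkt);
    }

    av_write_trailer(ofmt_ctx);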

    Now I am finding it difficult to follow the current libav* documentation (much of which covers deprecated APIs) and the little tutorial content available online.


    The most relevant article I found on working between DirectX and libav* is this article. However, it's actually doing the opposite of what I need to do. I am not sure how to go about creating an input stream and context from a DirectX texture! How can I convert the texture into an AVFrame which can be encoded into an AVStream?


    Here's a rough outline of what I am expecting:


    ID3D11Texture2D* win_textureptr = WindowManager::Get().GetWindow(RtspStreaming::WindowId())->GetWindowTexture();

    D3D11_TEXTURE2D_DESC* desc;
    win_textureptr->GetDesc(desc);
    int width = desc->Width;
    int height = desc->Height;
    //double audio_time = 0.0;
    auto start_time = std::chrono::system_clock::now();
    std::chrono::duration<double> video_time;

    //DirectX BGRA to h264 YUV420p
    SwsContext* conversion_ctx = sws_getContext(
        width, height, AV_PIX_FMT_BGRA,
        width, height, AV_PIX_FMT_YUV420P,
        SWS_BICUBLIN | SWS_BITEXACT, nullptr, nullptr, nullptr);

    uint8_t* sw_data[AV_NUM_DATA_POINTERS];
    int sw_linesize[AV_NUM_DATA_POINTERS];

    while (RtspStreaming::IsStreaming())
    {
        //copy the texture
        //win_textureptr->GetPrivateData();

        // convert BGRA to yuv420 pixel format
        /*
        frame = av_frame_alloc();
        //this obviously is incorrect... I would like to use the d3d11 texture here instead of frame
        sws_scale(conversion_ctx, frame->data, frame->linesize, 0, frame->height,
            sw_data, sw_linesize);

        frame->format = AV_PIX_FMT_YUV420P;
        frame->width = width;
        frame->height = height;
        */

        //encode to the video stream

        /* Compute current audio and video time. */
        video_time = std::chrono::system_clock::now() - start_time;

        //write frame and send
        av_interleaved_write_frame(ofmt_ctx, &pkt);

        av_frame_unref(frame);
    }

    av_write_trailer(ofmt_ctx);
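
    From what I've read so far, one possible (CPU-copy) way to fill the AVFrame would be the sketch below: copy the GPU texture into a staging texture, Map it, and sws_scale the BGRA rows into a YUV420P frame. I am not sure this is the right or fastest approach, and names like device, context, enc_ctx, video_st and next_pts are assumed from my surrounding setup code:

    // 1) create a CPU-readable staging copy of the captured texture
    D3D11_TEXTURE2D_DESC desc;
    win_textureptr->GetDesc(&desc);

    D3D11_TEXTURE2D_DESC staging_desc = desc;
    staging_desc.Usage = D3D11_USAGE_STAGING;
    staging_desc.BindFlags = 0;
    staging_desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    staging_desc.MiscFlags = 0;

    ID3D11Texture2D* staging = nullptr;
    device->CreateTexture2D(&staging_desc, nullptr, &staging);
    context->CopyResource(staging, win_textureptr);

    D3D11_MAPPED_SUBRESOURCE mapped;
    context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);

    // 2) convert the mapped BGRA bytes into a YUV420P AVFrame
    AVFrame* frame = av_frame_alloc();
    frame->format = AV_PIX_FMT_YUV420P;
    frame->width = desc.Width;
    frame->height = desc.Height;
    av_frame_get_buffer(frame, 0);

    const uint8_t* src_data[1] = { static_cast<const uint8_t*>(mapped.pData) };
    int src_linesize[1] = { static_cast<int>(mapped.RowPitch) };
    sws_scale(conversion_ctx, src_data, src_linesize, 0, desc.Height,
              frame->data, frame->linesize);

    context->Unmap(staging, 0);
    staging->Release();

    // 3) encode the frame and write the packets (enc_ctx / video_st assumed)
    frame->pts = next_pts++;
    avcodec_send_frame(enc_ctx, frame);

    AVPacket* out_pkt = av_packet_alloc();
    while (avcodec_receive_packet(enc_ctx, out_pkt) == 0)
    {
        av_packet_rescale_ts(out_pkt, enc_ctx->time_base, video_st->time_base);
        out_pkt->stream_index = video_st->index;
        av_interleaved_write_frame(ofmt_ctx, out_pkt);
        av_packet_unref(out_pkt);
    }
    av_packet_free(&out_pkt);
    av_frame_free(&frame);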
