Other articles (45)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • The plugin: Podcasts.

    14 July 2010, by

    The problem of podcasting is, once again, a problem that reveals the state of standardization of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, heavily oriented toward the use of iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free", notably backed by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Images

    15 May 2013

On other sites (4640)

  • WebM live streaming via DASH

    23 February 2017, by ewack

    I am following the instructions here to try to set up WebM live streaming via DASH. My input is an Axis camera streaming H.264, and I am using Node to spawn the ffmpeg processes. I am able to create the .hdr file and the .chk files. The .mpd file is even created, but it is empty and I get an error saying:

    Could not write header for output file #0 (incorrect codec parameters ?): Operation not permitted
    Stream mapping: Stream #0:0 -> #0:0 (copy)

    Here’s all of my code:

    var express = require('express');
    var spawn = require('child_process').spawn;

    var app = express();

    // Serve the generated header, chunk, and manifest files over HTTP.
    app.use(express.static(__dirname + '/public'));

    app.listen(8080);
    console.log("Running on Port 8080");

    var ffmpeg1 = spawn('ffmpeg', [
       '-y',
       // video input: RTSP (H.264) from the Axis camera
       '-i', 'rtsp://admin:password@192.168.1.54:554/axis-media/media.amp?videocodec=h264&resolution=1280x720',

       '-map', '0:0',
       '-pix_fmt', 'yuv420p',
       '-color_range', '2',
       '-c:v', 'libvpx-vp9',

       '-s', '1280x720',
       '-keyint_min', '25',
       '-g', '25',

       // VP9 live-encoding parameters
       '-speed', '6',
       '-tile-columns', '4',
       '-frame-parallel', '1',
       '-threads', '8',
       '-static-thresh', '0',
       '-max-intra-rate', '300',
       '-deadline', 'realtime',
       '-lag-in-frames', '0',
       '-error-resilient', '1',

       '-f', 'webm_chunk',
       '-header', 'public/glass_360.hdr',
       '-chunk_start_index', '1',
       'public/glass_360_%d.chk',
    ]);


    // Give ffmpeg1 a head start so the header file exists before the manifest run.
    setTimeout(()=> {
     var ffmpeg2 = spawn('ffmpeg', [
       '-y',
       '-f', 'webm_dash_manifest',
       '-live', '1',
       '-i', 'public/glass_360.hdr',
       '-c', 'copy',
       '-map', '0',
       '-r', '25',
       '-framerate', '25',

       '-f', 'webm_dash_manifest',
       '-live', '1',

       '-adaptation_sets', 'id=0,streams=0', // spawn passes args verbatim (no shell), so extra quotes would reach ffmpeg literally
       '-chunk_start_index', '1',
       '-chunk_duration_ms', '2000',
       '-time_shift_buffer_depth', '7200',
       '-minimum_update_period', '7200',

       'public/glass_live_manifest.mpd'
     ]);
     ffmpeg2.stdout.on('data',
         function (data) {
             console.log('ff2std: ' + data);
         }
     );

     ffmpeg2.stderr.on('data',
         function (data) {
             console.log('ff2err: ' + data);
         }
     );
    }, 5000);

    ffmpeg1.stdout.on('data',
       function (data) {
           console.log('ff1std: ' + data);
       }
    );

    ffmpeg1.stderr.on('data',
       function (data) {
           console.log('ff1err: ' + data);
       }
    );

    Here is all of my output:

    Running on Port 8080
    ff1err: ffmpeg version 3.2.4 Copyright (c) 2000-2017 the FFmpeg developers
     built with Apple LLVM version 6.0 (clang-600.0.57) (based on LLVM 3.5svn)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/3.2.4 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-frei0r --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-libopenjpeg --disable-decoder=jpeg2000 --extra-cflags=-I/usr/local/Cellar/openjpeg/2.1.2/include/openjpeg-2.1 --enable-nonfree --enable-vda

    ff1err:   libavutil      55. 34.101 / 55. 34.101
     libavcodec     57. 64.101 / 57. 64.101
     libavformat    57. 56.101 / 57. 56.101
     libavdevice    57.  1.100 / 57.  1.100
     libavfilter     6. 65.100 /  6. 65.100
     libavresample   3.  1.  0 /  3.  1.  0
     libswscale      4.  2.100 /  4.  2.100
     libswresample   2.  3.100 /  2.  3.100
     libpostproc    54.  1.100 / 54.  1.100

    ff1err: Input #0, rtsp, from 'rtsp://admin:password@192.168.1.54:554/axis-media/media.amp?videocodec=h264&resolution=1280x720':
     Metadata:
       title           : Session streamed with GStreamer
       comment         : rtsp-server
     Duration: N/A, start: 0.033344
    ff1err: , bitrate: N/A
       Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 30 fps, 30 tbr, 90k tbn, 180k tbc

    ff1err: [swscaler @ 0x7f8df281bc00] deprecated pixel format used, make sure you did set range correctly

    ff1err: [libvpx-vp9 @ 0x7f8df2800600] v1.6.1

    ff1err: Output #0, webm_chunk, to 'public/glass_360_%d.chk':
     Metadata:
       title           : Session streamed with GStreamer
       comment         : rtsp-server
       encoder         : Lavf57.56.101

    ff1err:     Stream #0:0: Video: vp9 (libvpx-vp9), yuv420p(pc), 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 200 kb/s, 25 fps, 1k tbn, 25 tbc
       Metadata:
         encoder         : Lavc57.64.101 libvpx-vp9
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Stream mapping:
     Stream #0:0 -> #0:0 (h264 (native) -> vp9 (libvpx-vp9))
    Press [q] to stop, [?] for help

    ff1err: frame=   10 fps=0.0 q=0.0 size=N/A time=00:00:00.36 bitrate=N/A speed=0.71x    
    ff1err: frame=   25 fps= 25 q=0.0 size=N/A time=00:00:00.96 bitrate=N/A speed=0.946x    
    ff1err: frame=   40 fps= 26 q=0.0 size=N/A time=00:00:01.56 bitrate=N/A speed=1.03x    
    ff1err: frame=   55 fps= 27 q=0.0 size=N/A time=00:00:02.16 bitrate=N/A speed=1.07x    
    ff1err: frame=   70 fps= 28 q=0.0 size=N/A time=00:00:02.76 bitrate=N/A speed=1.09x    
    ff1err: frame=   85 fps= 28 q=0.0 size=N/A time=00:00:03.36 bitrate=N/A speed=1.11x    
    ff2err: ffmpeg version 3.2.4 Copyright (c) 2000-2017 the FFmpeg developers
     built with Apple LLVM version 6.0 (clang-600.0.57) (based on LLVM 3.5svn)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/3.2.4 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-frei0r --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-libopenjpeg --disable-decoder=jpeg2000 --extra-cflags=-I/usr/local/Cellar/openjpeg/2.1.2/include/openjpeg-2.1 --enable-nonfree --enable-vda

    ff2err:   libavutil      55. 34.101 / 55. 34.101
     libavcodec     57. 64.101 / 57. 64.101
     libavformat    57. 56.101 / 57. 56.101
     libavdevice    57.  1.100 / 57.  1.100
     libavfilter     6. 65.100 /  6. 65.100
     libavresample   3.  1.  0 /  3.  1.  0
     libswscale      4.  2.100 /  4.  2.100
     libswresample   2.  3.100 /  2.  3.100
     libpostproc    54.  1.100 / 54.  1.100

    ff2err: [webm_dash_manifest @ 0x7fbc5b80b400] Could not find codec parameters for stream 0 (Video: vp9, none, 1280x720): unspecified pixel format
    Consider increasing the value for the 'analyzeduration' and 'probesize' options

    ff2err: Input #0, webm_dash_manifest, from 'public/glass_360.hdr':
     Metadata:
       title           : Session streamed with GStreamer
       encoder         : Lavf57.56.101
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: vp9, none, 1280x720
    ff2err: , SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 1k tbn, 1k tbc (default)
       Metadata:
         webm_dash_manifest_file_name: glass_360.hdr
         webm_dash_manifest_track_number: 1

    ff2err: Could not write header for output file #0 (incorrect codec parameters ?): Operation not permittedStream mapping:
     Stream #0:0 -> #0:0 (copy)

    ff2err:     Last message repeated 1 times

    ff1err: frame=  101 fps= 29 q=0.0 size=N/A time=00:00:04.00 bitrate=N/A speed=1.13x    
    ff1err: frame=  116 fps= 29 q=0.0 size=N/A time=00:00:04.60 bitrate=N/A speed=1.14x    
    ff1err: frame=  131 fps= 29 q=0.0 size=N/A time=00:00:05.20 bitrate=N/A speed=1.15x    
    ff1err: frame=  146 fps= 29 q=0.0 size=N/A time=00:00:05.80 bitrate=N/A speed=1.15x    
    ff1err: frame=  161 fps= 29 q=0.0 size=N/A time=00:00:06.40 bitrate=N/A speed=1.15x    
    ff1err: frame=  177 fps= 29 q=0.0 size=N/A time=00:00:07.04 bitrate=N/A speed=1.16x    
    ff1err: frame=  192 fps= 29 q=0.0 size=N/A time=00:00:07.64 bitrate=N/A speed=1.16x    
    ff1err: frame=  207 fps= 29 q=0.0 size=N/A time=00:00:08.24 bitrate=N/A speed=1.16x    
    ff1err: frame=  222 fps= 29 q=0.0 size=N/A time=00:00:08.84 bitrate=N/A speed=1.17x    
    ff1err: frame=  237 fps= 29 q=0.0 size=N/A time=00:00:09.44 bitrate=N/A speed=1.17x    
    ff1err: frame=  252 fps= 29 q=0.0 size=N/A time=00:00:10.04 bitrate=N/A speed=1.17x  

    Why is ffmpeg creating an empty .mpd file?
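
    For what it's worth, ffmpeg2's own log already hints at the failure chain: the webm_dash_manifest demuxer cannot recover the pixel format from the header file ("Could not find codec parameters for stream 0 ... unspecified pixel format") and explicitly suggests raising 'analyzeduration' and 'probesize'. A minimal sketch of that change, keeping the rest of the invocation as above (the values are illustrative guesses, not verified against this setup):

    var ffmpeg2 = spawn('ffmpeg', [
      '-y',
      // Input options must come before '-i'; both values are guesses.
      '-analyzeduration', '10000000', // microseconds to spend probing the input
      '-probesize', '10000000',       // bytes to read while probing
      '-f', 'webm_dash_manifest',
      '-live', '1',
      '-i', 'public/glass_360.hdr',
      // ... remaining arguments unchanged ...
    ]);

    If that is not enough, the quoting of the adaptation_sets value (noted in the code above) is the other usual suspect: a literal leading quote keeps the manifest muxer from parsing 'id=0,streams=0', which can make writing the header fail.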

  • RTP packets detected as UDP

    28 February 2017, by user3172852

    Here is what I am trying to do:

    WebRTC endpoint > RTP Endpoint > ffmpeg > RTMP server.

    This is what my SDP file looks like.

    var cm_offer = "v=0\n" +
                 "o=- 3641290734 3641290734 IN IP4 127.0.0.1\n" +
                 "s=nginx\n" +
                 "c=IN IP4 127.0.0.1\n" +
                 "t=0 0\n" +
                 "m=audio 60820 RTP/AVP 0\n" +
                 "a=rtpmap:0 PCMU/8000\n" +
                 "a=recvonly\n" +
                 "m=video 59618 RTP/AVP 101\n" +
                 "a=rtpmap:101 H264/90000\n" +
                 "a=recvonly\n";

    What’s happening is that Wireshark can see the incoming packets on port 59618, but it classifies them as plain UDP rather than RTP. I am trying to capture the packets using ffmpeg with the following command:

    ubuntu@ip-132-31-40-100:~$ ffmpeg -i udp://127.0.0.1:59618 -vcodec copy stream.mp4
    ffmpeg version git-2017-01-22-f1214ad Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
     configuration: --extra-libs=-ldl --prefix=/opt/ffmpeg --mandir=/usr/share/man --enable-avresample --disable-debug --enable-nonfree --enable-gpl --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-decoder=amrnb --disable-decoder=amrwb --enable-libpulse --enable-libfreetype --enable-gnutls --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libvorbis --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libspeex --enable-libass --enable-avisynth --enable-libsoxr --enable-libxvid --enable-libvidstab --enable-libwavpack --enable-nvenc
     libavutil      55. 44.100 / 55. 44.100
     libavcodec     57. 75.100 / 57. 75.100
     libavformat    57. 63.100 / 57. 63.100
     libavdevice    57.  2.100 / 57.  2.100
     libavfilter     6. 69.100 /  6. 69.100
     libavresample   3.  2.  0 /  3.  2.  0
     libswscale      4.  3.101 /  4.  3.101
     libswresample   2.  4.100 /  2.  4.100
     libpostproc    54.  2.100 / 54.  2.100

    All I get is a blinking cursor, and the stream.mp4 file is not written to disk after I exit (Ctrl+C).

    So can you help me figure out:

    1. Why Wireshark cannot detect the packets as RTP (I suspect it has something to do with the SDP)
    2. How to handle the SDP answer when the RTP endpoint is pushing to ffmpeg, which doesn't send an answer back
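
    On both points, a hedged note rather than a verified fix: a bare udp:// input gives ffmpeg no way to know the payload is RTP or which codec it carries, and Wireshark likewise only labels UDP as RTP when it has seen the signalling (or when "Decode As... RTP" is applied manually). The usual approach is to hand ffmpeg the same SDP that was given to the RtpEndpoint. A minimal sketch, assuming the offer text is available on the machine running ffmpeg (the file name stream.sdp is hypothetical):

    var fs = require('fs');
    var spawn = require('child_process').spawn;

    // Persist the same SDP that is handed to the RtpEndpoint (cm_offer above),
    // then point ffmpeg at the file instead of the raw UDP socket.
    fs.writeFileSync('stream.sdp', cm_offer);

    var ffmpeg = spawn('ffmpeg', [
      '-protocol_whitelist', 'file,udp,rtp', // recent ffmpeg builds require this for SDP input
      '-i', 'stream.sdp',
      '-vcodec', 'copy',
      'stream.mp4'
    ]);
    ffmpeg.stderr.on('data', function (data) { console.log('' + data); });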

    Here is the entire code (hello world tutorial modified):

    /*
        * (C) Copyright 2014-2015 Kurento (http://kurento.org/)
        *
        * Licensed under the Apache License, Version 2.0 (the "License");
        * you may not use this file except in compliance with the License.
        * You may obtain a copy of the License at
        *
        *   http://www.apache.org/licenses/LICENSE-2.0
        *
        * Unless required by applicable law or agreed to in writing, software
        * distributed under the License is distributed on an "AS IS" BASIS,
        * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        * See the License for the specific language governing permissions and
        * limitations under the License.
        */

       function getopts(args, opts)
       {
         var result = opts.default || {};
         args.replace(
             new RegExp("([^?=&]+)(=([^&]*))?", "g"),
             function($0, $1, $2, $3) { result[$1] = decodeURI($3); });

         return result;
       };

       var args = getopts(location.search,
       {
         default:
         {
           ws_uri: 'wss://' + location.hostname + ':8433/kurento',
           ice_servers: undefined
         }
       });

       function setIceCandidateCallbacks(webRtcPeer, webRtcEp, onerror)
       {
         webRtcPeer.on('icecandidate', function(candidate) {
           console.log("Local candidate:",candidate);

           candidate = kurentoClient.getComplexType('IceCandidate')(candidate);

           webRtcEp.addIceCandidate(candidate, onerror)
         });

         webRtcEp.on('OnIceCandidate', function(event) {
           var candidate = event.candidate;

           console.log("Remote candidate:",candidate);

           webRtcPeer.addIceCandidate(candidate, onerror);
         });
       }


       function setIceCandidateCallbacks2(webRtcPeer, rtpEp, onerror)
       {
         webRtcPeer.on('icecandidate', function(candidate) {
           console.log("Localr candidate:",candidate);

           candidate = kurentoClient.getComplexType('IceCandidate')(candidate);

           rtpEp.addIceCandidate(candidate, onerror)
         });
       }


       window.addEventListener('load', function()
       {
         console = new Console();

         var webRtcPeer;
         var pipeline;
         var webRtcEpt;

         var videoInput = document.getElementById('videoInput');
         var videoOutput = document.getElementById('videoOutput');

         var startButton = document.getElementById("start");
         var stopButton = document.getElementById("stop");

         startButton.addEventListener("click", function()
         {
           showSpinner(videoInput, videoOutput);

           var options = {
             localVideo: videoInput,
             remoteVideo: videoOutput
           };


           if (args.ice_servers) {
            console.log("Use ICE servers: " + args.ice_servers);
            options.configuration = {
              iceServers : JSON.parse(args.ice_servers)
            };
           } else {
            console.log("Use freeice")
           }

           webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error)
           {
             if(error) return onError(error)

             this.generateOffer(onOffer)
           });

           function onOffer(error, sdpOffer)
           {
             if(error) return onError(error)

             kurentoClient(args.ws_uri, function(error, client)
             {
               if(error) return onError(error);

               client.create("MediaPipeline", function(error, _pipeline)
               {
                 if(error) return onError(error);

                 pipeline = _pipeline;

                 pipeline.create("WebRtcEndpoint", function(error, webRtc){
                   if(error) return onError(error);

                   webRtcEpt = webRtc;

                   setIceCandidateCallbacks(webRtcPeer, webRtc, onError)

                   webRtc.processOffer(sdpOffer, function(error, sdpAnswer){
                     if(error) return onError(error);

                     webRtcPeer.processAnswer(sdpAnswer, onError);
                   });
                   webRtc.gatherCandidates(onError);

                   webRtc.connect(webRtc, function(error){
                     if(error) return onError(error);

                     console.log("Loopback established");
                   });
                  });

                  pipeline.create("RtpEndpoint", function(error, rtp){
                   if(error) return onError(error);

                   //setIceCandidateCallbacks2(webRtcPeer, rtp, onError)


                   var cm_offer = "v=0\n" +
                         "o=- 3641290734 3641290734 IN IP4 127.0.0.1\n" +
                         "s=nginx\n" +
                         "c=IN IP4 127.0.0.1\n" +
                         "t=0 0\n" +
                         "m=audio 60820 RTP/AVP 0\n" +
                         "a=rtpmap:0 PCMU/8000\n" +
                         "a=recvonly\n" +
                         "m=video 59618 RTP/AVP 101\n" +
                         "a=rtpmap:101 H264/90000\n" +
                         "a=recvonly\n";



                   rtp.processOffer(cm_offer, function(error, cm_sdpAnswer){
                     if(error) return onError(error);

                     //webRtcPeer.processAnswer(cm_sdpAnswer, onError);
                   });
                   //rtp.gatherCandidates(onError);

                   webRtcEpt.connect(rtp, function(error){
                     if(error) return onError(error);

                     console.log("RTP endpoint connected to webRTC");
                   });
                  });

                });
             });
           }
         });
         stopButton.addEventListener("click", stop);


         function stop() {
           if (webRtcPeer) {
             webRtcPeer.dispose();
             webRtcPeer = null;
           }

           if(pipeline){
             pipeline.release();
             pipeline = null;
           }

           hideSpinner(videoInput, videoOutput);
         }

         function onError(error) {
           if(error)
           {
             console.error(error);
             stop();
           }
         }
       });


       function showSpinner() {
         for (var i = 0; i < arguments.length; i++) {
           arguments[i].poster = 'img/transparent-1px.png';
           arguments[i].style.background = "center transparent url('img/spinner.gif') no-repeat";
         }
       }

       function hideSpinner() {
         for (var i = 0; i < arguments.length; i++) {
           arguments[i].src = '';
           arguments[i].poster = 'img/webrtc.png';
           arguments[i].style.background = '';
         }
       }

       /**
        * Lightbox utility (to display media pipeline image in a modal dialog)
        */
       $(document).delegate('*[data-toggle="lightbox"]', 'click', function(event) {
         event.preventDefault();
         $(this).ekkoLightbox();
       });

  • Synchronizing screencasting (ffmpeg) and capturing from the webcam (OpenCV)

    24 April 2013, by lyuba

    As in my previous questions, I am trying to build a simple eye tracker. I decided to start with a Linux version (I run Ubuntu).

    To complete this task, screencasting and webcam capture need to be organized so that frames from both streams match each other exactly and both streams contain the same total number of frames.

    The screencast fps depends entirely on the camera's fps, so each time we get an image from the webcam we could, in principle, grab a screen frame at the same moment. However, tools for fast screencasting, such as ffmpeg, produce an .avi file as the result and require the fps to be known before they start.

    On the other hand, tools like Java+Robot or ImageMagick seem to need around 20 ms to return a .jpg screenshot, which is pretty slow for the task. But they can be invoked right after each webcam frame is grabbed, providing the needed synchronization.

    So the sub-questions are:

    1. Does the USD camera's frame rate vary during a single session ?
    2. Are there any tools which provide fast screencasting frame by frame ?
    3. Is there any way to make ffmpeg push a new frame to the .avi file only when program initiates this request ?

    For my task I may either use C++ or Java.
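
    On sub-question 3, one common pattern (an assumption on my part, sketched in Node for consistency with the earlier examples; the same stdin-pipe idea works from C++ or Java) is to start ffmpeg reading raw frames from standard input, so a frame is encoded only when the program writes one:

    var spawn = require('child_process').spawn;

    // Hypothetical frame geometry; match it to the actual webcam capture.
    var W = 640, H = 480;

    // ffmpeg consumes raw BGR frames from stdin, so a frame enters the output
    // only when we write one; the webcam grab loop drives the pacing.
    var ffmpeg = spawn('ffmpeg', [
      '-f', 'rawvideo',
      '-pixel_format', 'bgr24',
      '-video_size', W + 'x' + H,
      '-framerate', '25',      // nominal rate stamped into the file
      '-i', '-',               // read frames from stdin
      '-c:v', 'libx264',
      'screen.avi'
    ]);

    // Call this right after each webcam frame is grabbed:
    function pushFrame(rawBgrFrame) { // Buffer of W*H*3 bytes
      ffmpeg.stdin.write(rawBgrFrame);
    }

    One caveat: the container still records a fixed nominal fps, so if the camera's rate drifts, the frame counts will match but the timestamps will not.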

    I am, actually, an interface designer, not a driver programmer, and this task seems pretty low-level. I would be grateful for any suggestion or tip!