
Other articles (101)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Apache-specific configuration

    4 February 2011

    Specific modules
    For the Apache configuration, it is recommended to enable certain modules that are not specific to MediaSPIP but that improve performance: mod_deflate and mod_headers, so that Apache compresses pages automatically (see this tutorial); and mod_expires, to handle hit expiration correctly (see this tutorial).
    It is also recommended to add Apache support for the WebM MIME type, as described in this tutorial; a sketch of the corresponding commands follows this entry.
    Creating a (...)
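    A minimal sketch of what those recommendations translate to on a Debian/Ubuntu Apache install (the module helper commands and the config file name are assumptions, not taken from the article):

    # Enable the performance-related modules mentioned above
    sudo a2enmod deflate headers expires

    # Declare the WebM MIME type in a small config snippet and enable it
    echo "AddType video/webm .webm" | sudo tee /etc/apache2/conf-available/webm.conf
    sudo a2enconf webm

    # Reload Apache so the new modules and MIME type take effect
    sudo service apache2 reload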

On other sites (9473)

  • Chrome extension using FFmpeg

    28 December 2014, by Villie

    I am developing a Chrome extension where I have to record the video stream from Chrome’s desktopCapture API. I was initially using whammy.js, but the video quality was poor at higher resolutions.
    Now I am planning to use ffmpeg to record the video stream by writing a NaCl module.
    While going through blog posts I found out that Chrome includes ffmpeg for playing videos.

    So is there an API through which I can use Chrome’s ffmpeg from NaCl to record the video stream?

    Thanks

  • How should I add a transparent watermark.png over my RTMP h264 stream with ffmpeg?

    16 June 2013, by RoelandP

    I have a Raspberry Pi with the new camera module, hooked up (in this case) to Bambuser. You can see the stream here; it's from a windmill in the Netherlands (the camera position will be better within a few weeks).

    I successfully have the stream running, but now I want to add an image (an alpha-transparent PNG) on top of the input stream, which is piped to ffmpeg and streamed to Bambuser.

    I currently use the following command (user-specific details wiped out) to successfully stream the input from the Raspberry Pi camera module (it's great, HD and all, hardware rendering) to Bambuser, following the great tutorial by Slickstreamer:

    raspivid -t 999999999 -w 960 -h 540 -fps 25 -b 500000 -o - | ffmpeg  -i - -vcodec copy -an -metadata title="STREAM NAME" -f flv rtmp://USER_X.fme.bambuser.com/b-fme/USER_STREAM_KEY_X

    I followed the ffmpeg docs, and it seems to me I should use the '-vf' option to apply the 'movie' filter, like so:

    raspivid -t 999999999 -w 960 -h 540 -fps 25 -b 500000 -o - | ffmpeg  -i - -vf "movie='/home/USER/watermark.png' [logo]; [in][logo] overlay=main_w-overlay_w-10:10 [out]" -vcodec copy -an -metadata title="STREAM NAME" -f flv rtmp://USER_X.fme.bambuser.com/b-fme/USER_STREAM_KEY_X

    and various other -vf filters, like '-vf vflip' or '-vf mandelbrot'. But it doesn't seem to work: the stream just shows the direct input from the Raspberry Pi camera.

    This is the output when started with the following -vf command:

    raspivid -t 999999999 -w 960 -h 540 -fps 25 -b 500000 -o - | ffmpeg -i - -vcodec copy -vf 'movie=0:png:/home/USER/watermark.png [watermark];[in] [watermark]overlay=0:0:1[out]' -an -metadata title="STREAM NAME" -f flv rtmp://USER_X.fme.bambuser.com/b-fme/USER_STREAM_KEY_X
    ffmpeg version N-54036-g6c4516d Copyright (c) 2000-2013 the FFmpeg developers
      built on Jun 15 2013 XX:XX with gcc 4.6 (Debian 4.6.3-14+rpi1)
      configuration:
      libavutil      52. 35.101 / 52. 35.101
      libavcodec     55. 16.100 / 55. 16.100
      libavformat    55.  8.102 / 55.  8.102
      libavdevice    55.  2.100 / 55.  2.100
      libavfilter     3. 77.101 /  3. 77.101
      libswscale      2.  3.100 /  2.  3.100
      libswresample   0. 17.102 /  0. 17.102
    [h264 @ 0x1917cc0] max_analyze_duration 5000000 reached at 5000000 microseconds
    Input #0, h264, from 'pipe:':
      Duration: N/A, bitrate: N/A
        Stream #0:0: Video: h264 (High), yuv420p, 960x540, 25 fps, 25 tbr, 1200k tbn, 50 tbc
    Output #0, flv, to 'rtmp://USER_X.fme.bambuser.com/b-fme/USER_STREAM_KEY_X':
      Metadata:
        title : STREAM NAME
        encoder : Lavf55.8.102
        Stream #0:0: Video: h264 ([7][0][0][0] / 0x0007), yuv420p, 960x540, q=2-31, 25 fps, 1k tbn, 1200k tbc
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
    frame= 2344 fps= 27 q=-1.0 size=    4827kB time=00:01:33.72 bitrate= 421.9kbits/s 
    

    As mentioned above, other -vf filters also don't seem to apply to the output stream on Bambuser; I think I am fundamentally doing something wrong here.

    1. Should I map the raspivid stream and overlay the image 'watermark.png' on top of that? Would that be the solution? Does anyone have experience with this? (A sketch of one possible approach follows this question.)

    Thank you very much for your thoughts in advance.
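    A minimal sketch of one possible direction, added for clarity and not part of the original question: '-vcodec copy' bypasses the filter graph, so '-vf' filters have no effect on a stream that is only copied; overlaying the watermark therefore requires re-encoding the video. Assuming the ffmpeg build includes libx264 (and setting aside whether the Raspberry Pi can re-encode 960x540 at 25 fps in real time), the command might look like this:

    raspivid -t 999999999 -w 960 -h 540 -fps 25 -b 500000 -o - | ffmpeg -i - -i /home/USER/watermark.png -filter_complex "[0:v][1:v] overlay=main_w-overlay_w-10:10" -c:v libx264 -preset ultrafast -b:v 500k -an -metadata title="STREAM NAME" -f flv rtmp://USER_X.fme.bambuser.com/b-fme/USER_STREAM_KEY_X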

  • What would be the best strategy to take an RTP stream and send it to an RTMP server?

    27 November 2014, by matiasinsaurralde

    I’m receiving an RTP/UDP stream from a hardware encoder. I have tried ffmpeg, so that it takes this input and outputs the stream as FLV (which is then sent to NGINX with nginx-rtmp-module). However, I’m not able to play the stream smoothly once it’s received by nginx: some frames are broken or lost, etc.

    I think that my CPU is too slow for this format change (FLV) and/or ffmpeg is missing a lot of RTP packets. Any ideas? (One possible approach is sketched below.)
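    A minimal sketch of one option, added for clarity and not part of the original question: if the hardware encoder already produces H.264, remuxing with stream copy avoids re-encoding and most of the CPU load. The SDP file name and the RTMP URL below are assumptions, not details from the post:

    # Remux the RTP input (described by an SDP file) straight to the nginx-rtmp endpoint;
    # newer ffmpeg builds may also require: -protocol_whitelist file,udp,rtp
    ffmpeg -i input.sdp -c:v copy -an -f flv rtmp://127.0.0.1/live/stream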