
Other articles (74)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and audio both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (5575)

  • Dynamic volume mixing with FFmpeg

    1 June 2021, by jvhang

    I am streaming audio using FFmpeg and need to mix two audio sources and set their volume levels dynamically, i.e. once the stream starts, I need to be able to update the ratio of the two volumes.

    


    Currently, I have the volume mixing and streaming working using the FFmpeg CLI, but the volume mix ratio is static.

    


    Is there a way to dynamically set the volume ratio using the CLI tool? Perhaps something with an FFmpeg expression?

    


    Or is using the API the only option? If so, can anyone point me towards an example of dynamically mixing audio? I haven't been able to find one.

    


    Edit: here are the params I currently pass to mix audio. Again, this works fine, so I am not including the log, as there is no error to fix. The question is how I can adjust the ratio of the amix output after the process has launched. Willing to use the API if need be.

    


                    "-f rawvideo" + // container
                " -vcodec rawvideo" + // codec
                " -s " + width + "x" + height + // input video size, must be correct
                " -pix_fmt rgba" + // pixel format
                " -framerate " + frameRate +
                " -i pipe:0" + // from stdin in via pipe
               " -f dshow" +
                " -i audio=\"Stereo Mix (Realtek(R) Audio)\"" + //
                " -f dshow" +
                " -i audio=\"Microphone Array (Xbox NUI Sensor)\"" + // 
                " -filter_complex \"amix\"" +  // mix the two inputs, can added ratio if needed
                " -c:v libx264" + // x264 software encoder
                " -g " + frameRate *2 +
                " -keyint_min " + frameRate +
                " -c:a aac" + // audio format
                " -b:v 6M -maxrate 2M -bufsize 1M" + // constrain bitrate per twitch
                " -f flv" +
                " " + address 
            );
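
    One way this can be approached purely from the CLI (a sketch only, not verified against a specific build) is to give each input its own named volume filter ahead of amix and to insert FFmpeg's azmq filter into the graph, so that volume commands can be pushed to the running process over ZeroMQ, for instance with the zmqsend helper from the tools/ directory of the FFmpeg source tree. This assumes an FFmpeg build configured with --enable-libzmq; the dshow device names are the ones from the question, and rtmp://example.invalid/live/stream is a placeholder output.

        # audio-only sketch: each input sits behind a named volume filter; azmq listens on port 5555 for runtime filter commands
        ffmpeg -f dshow -i audio="Stereo Mix (Realtek(R) Audio)" -f dshow -i audio="Microphone Array (Xbox NUI Sensor)" -filter_complex "[0:a]azmq,volume@desktop=1.0[a0];[1:a]volume@mic=1.0[a1];[a0][a1]amix=inputs=2[aout]" -map "[aout]" -c:a aac -f flv rtmp://example.invalid/live/stream

        # from another shell, while the stream is running: lower the microphone to 30%
        echo volume@mic volume 0.3 | tools/zmqsend

    For a fixed ratio, the amix filter also accepts a weights option, and if the API route is taken instead, the corresponding entry point for runtime changes is avfilter_graph_send_command() in libavfilter.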


    


  • lavu/video_enc_params : Avoid relying on an undefined C construct

    15 January 2023, by Martin Storsjö
    lavu/video_enc_params : Avoid relying on an undefined C construct
    

    The construct of using offsetof on a (potentially anonymous) struct
    defined within the offsetof expression, while supported by all
    current compilers, has been declared explicitly undefined by the
    C standards committee [1].

    Clang recently got a change to identify this as an issue [2];
    initially it was treated as a hard error, but it was soon after
    softened into a warning under the -Wgnu-offsetof-extensions option
    (not enabled automatically as part of -Wall though).

    Nevertheless - in this particular case, it's trivial to fix the
    code not to rely on the construct that the standards committee has
    explicitly called out as undefined.

    [1] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2350.htm
    [2] https://reviews.llvm.org/D133574

    Signed-off-by: Martin Storsjö <martin@martin.st>

    • [DH] libavutil/video_enc_params.c

  • Use ffmpeg to stream live content to azure media services

    9 June 2016, by Dadicool

    I’ve been trying to stream content to Azure Media Services using ffmpeg, as it’s one of the options described here: http://azure.microsoft.com/blog/2014/09/18/azure-media-services-rtmp-support-and-live-encoders/

    My command is:

    ffmpeg -v verbose -i 300.mp4 -strict -2 -c:a aac -b:a 128k -ar 44100 -r 30 -g 60 -keyint_min 60 -b:v 400000 -c:v libx264 -preset medium -bufsize 400k -maxrate 400k -f flv rtmp://nessma-****.channel.mediaservices.windows.net:1935/live/584c99f5c47f424d9e83ac95364331e7

    I have made sure that the streaming endpoint has one active streaming unit; I also made sure that the channel is actually in the Ready state, and I even got it to start streaming (which makes a PublishURL available).

    When I execute the ffmpeg command to start streaming, I keep getting the following error:

    ffmpeg version 2.5.2 Copyright (c) 2000-2014 the FFmpeg developers
     built on Dec 30 2014 11:31:18 with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
     configuration: --prefix=/Volumes/Ramdisk/sw --enable-gpl --enable-pthreads --enable-version3 --enable-libspeex --enable-libvpx --disable-decoder=libvpx --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-avfilter --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-filters --enable-libgsm --enable-libvidstab --enable-libx265 --arch=x86_64 --enable-runtime-cpudetect
     libavutil      54. 15.100 / 54. 15.100
     libavcodec     56. 13.100 / 56. 13.100
     libavformat    56. 15.102 / 56. 15.102
     libavdevice    56.  3.100 / 56.  3.100
     libavfilter     5.  2.103 /  5.  2.103
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Routing option strict to both codec and muxer layer
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9a0a002c00] overread end of atom 'colr' by 1 bytes
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9a0a002c00] stream 0, timescale not set
    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f9a0a002c00] max_analyze_duration 5000000 reached at 5003637 microseconds
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '300.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 0
       compatible_brands: mp42isomavc1
       creation_time   : 2014-01-11 05:39:32
       genre           : Trailer
       artist          : Warner Bros.
       title           : 300: Rise of an Empire - Trailer 2
       encoder         : HandBrake 0.9.9 2013051800
       date            : 2014
     Duration: 00:02:33.24, start: 0.000000, bitrate: 7377 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1920x1080 (1920x1088), 7219 kb/s, 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc (default)
       Metadata:
         creation_time   : 2014-01-11 05:39:32
         encoder         : JVT/AVC Coding
       Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 157 kb/s (default)
       Metadata:
         creation_time   : 2014-01-11 05:39:32
       Stream #0:2: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 101x150 [SAR 72:72 DAR 101:150], 90k tbr, 90k tbn, 90k tbc
    rtmp://nessma-****.channel.mediaservices.windows.net:1935/live/584c99f5c47f424d9e83ac95364331e7: Input/output error

    The Azure blog post clearly states that this should be possible, but I can’t find a working example anywhere.

    Environment:

    • Mac OS X Mavericks
    • FFmpeg installed from an official build
    • 300.mp4: the 1080p trailer of 300: Rise of an Empire
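
    One detail that is often added when pushing a local file to a live RTMP ingest point is the -re input flag, which makes ffmpeg read the file at its native frame rate instead of as fast as possible. As a variant of the command above (everything else unchanged, the masked channel URL kept as posted):

        # same command with -re added so the file is fed in real time
        ffmpeg -re -v verbose -i 300.mp4 -strict -2 -c:a aac -b:a 128k -ar 44100 -r 30 -g 60 -keyint_min 60 -b:v 400000 -c:v libx264 -preset medium -bufsize 400k -maxrate 400k -f flv rtmp://nessma-****.channel.mediaservices.windows.net:1935/live/584c99f5c47f424d9e83ac95364331e7

    An Input/output error at this stage usually points to the RTMP connection being refused or dropped rather than to an encoding problem, so it is also worth re-checking that the ingest URL, including the stream path, matches the one currently published for the channel.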