
Media (91)

Other articles (37)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with MediaSPIP's automated installation script:

    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send us the fixes needed to add it (...)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the template; a page for configuring the site's home page; a page for configuring the sections.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling their display and their specific features (...)

On other sites (6006)

  • Ffmpeg send duration of video to client (using node-fluent-ffmpeg)

    26 May 2013, by Vprnl

    I'm really new to the world of ffmpeg, so please excuse me if this is a stupid question.

    I'm using the Node-fluent-ffmpeg module to stream a movie and convert it from AVI to WebM with FFmpeg.

    So far so good (it plays the video), but I'm having trouble passing the duration to the player. It also gives me an error, even though it plays the video.

    My code is as follows:

    // assumed to be present at the top of the file (not shown in the question):
    var fs = require('fs');
    var ffmpeg = require('fluent-ffmpeg');

    var stat = fs.statSync(movie);

    var start = 0;
    var end = 0;
    var range = req.header('Range');
    if (range != null) {
        // "bytes=start-end" -> pull the two numbers out of the Range header
        start = parseInt(range.slice(range.indexOf('bytes=') + 6,
            range.indexOf('-')));
        end = parseInt(range.slice(range.indexOf('-') + 1,
            range.length));
    }
    if (isNaN(end) || end == 0) end = stat.size - 1;
    if (start > end) return;

    // rough length in seconds, assuming a constant 1024 kbit/s stream
    var duration = (end / 1024) * 8 / 1024;

    res.writeHead(206, { // NOTE: a partial http response
        'Connection': 'close',
        'Content-Type': 'video/webm',
        'Content-Length': end - start,
        'Content-Range': 'bytes ' + start + '-' + end + '/' + stat.size,
        'Transfer-Encoding': 'chunked'
    });

    var proc = new ffmpeg({ source: movie, nolog: true, priority: 1, timeout: 15000 })
        .toFormat('webm')
        .addOptions(['-probesize 900000', '-analyzeduration 0', '-minrate 1024k',
            '-maxrate 1024k', '-bufsize 1835k', '-t ' + duration + ' -ss'])
        .writeToStream(res, function (retcode, error) {
            if (!error) {
                console.log('file has been converted successfully', retcode);
            } else {
                console.log('file conversion error', error);
            }
        });

    I set the header with a start and an end based on this article: http://delog.wordpress.com/2011/04/25/stream-webm-file-to-chrome-using-node-js/

    I calculate the length in seconds in the variable duration.
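    Note that this formula only converts bytes into seconds if the stream really runs at the assumed constant 1024 kbit/s (the -minrate/-maxrate value passed to FFmpeg above). A small standalone C++ sketch of the arithmetic, using only numbers taken from the question itself (the 13232128-byte size from the headers in the [EDIT] below, and the 1517 kb/s input bitrate reported in the FFmpeg log):

    // Sketch of the byte-count-to-seconds arithmetic; inputs come from the
    // question: the 13232128-byte file and the 1517 kb/s input bitrate that
    // the FFmpeg log further down reports.
    #include <cstdio>

    int main() {
        double size_bytes = 13232128.0;

        // The question's estimate, which assumes a constant 1024 kbit/s:
        double assumed_s = (size_bytes / 1024.0) * 8.0 / 1024.0;

        // The same arithmetic with the bitrate FFmpeg actually reports:
        double actual_s = size_bytes * 8.0 / 1517000.0;

        std::printf("at 1024 kbit/s: %.1f s, at 1517 kb/s: %.1f s\n",
                    assumed_s, actual_s);   // ~100.9 s vs ~69.8 s
        return 0;
    }

    The second figure matches the Duration: 00:01:09.78 in the log below, so the estimate is only as good as the bitrate assumption behind it.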

    The error FFmpeg is giving me is:

       file conversion error ffmpeg version N-52458-gaa96439 Copyright (c) 2000-2013 the FFmpeg developers
         built on Apr 24 2013 22:19:32 with gcc 4.8.0 (GCC)
         configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
         libavutil      52. 27.101 / 52. 27.101
         libavcodec     55.  6.100 / 55.  6.100
         libavformat    55.  3.100 / 55.  3.100
         libavdevice    55.  0.100 / 55.  0.100
         libavfilter     3. 60.101 /  3. 60.101
         libswscale      2.  2.100 /  2.  2.100
         libswresample   0. 17.102 /  0. 17.102
         libpostproc    52.  3.100 / 52.  3.100
       Input #0, avi, from 'C:/temp/test.avi':
         Metadata:
           encoder         : Nandub v1.0rc2
         Duration: 00:01:09.78, start: 0.000000, bitrate: 1517 kb/s
           Stream #0:0: Video: msmpeg4v3 (DIV3 / 0x33564944), yuv420p, 640x352, 23.98 tbr, 23.98 tbn, 23.98 tbc
           Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 48000 Hz, stereo, s16p, 222 kb/s
       [libvpx @ 0036db20] v1.2.0
       Output #0, webm, to 'pipe:1':
         Metadata:
           encoder         : Lavf55.3.100
           Stream #0:0: Video: vp8, yuv420p, 640x352, q=-1--1, 200 kb/s, 1k tbn, 23.98 tbc
           Stream #0:1: Audio: vorbis, 48000 Hz, stereo, fltp
       Stream mapping:
         Stream #0:0 -> #0:0 (msmpeg4 -> libvpx)
         Stream #0:1 -> #0:1 (mp3 -> libvorbis)

    The client-side player (VideoJS) says the file is infinite/NaN in length.

    I feel like I'm pretty close to a solution, but my inexperience with the subject matter keeps me from getting it to work. If I'm unclear in any way, please let me know. (I have a tendency to explain things fuzzily.)

    Thanks in advance!

    [EDIT]

    I removed the duration bit because it has nothing to do with the issue. I checked the response headers on the client and saw:

    Accept-Ranges:bytes
    Connection:keep-alive
    Content-Length:13232127
    Content-Range:bytes 0-13232127/13232128
    Content-Type:video/webm

    Why can't the client figure out the duration even though it receives it in the header?

  • Encoding AAC with ffmpeg (c++)

    26 November 2016, by Mockarutan

    I’m working on video encoding that will be used in a Unity plugin. I have made image encoding work, but now I’m at the audio, so I’m trying to encode just the audio into an MP4 file with AAC. And I’m stuck: the resulting file does not contain anything. Also, from what I understand, AAC in ffmpeg only supports AV_SAMPLE_FMT_FLTP; that’s why I use it. Here’s my code:
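    As a quick aside on the FLTP point: in the API generation this code targets, an encoder advertises its supported sample formats in AVCodec::sample_fmts, so the claim can be verified directly. A minimal sketch, assuming the same FFmpeg-3.x-era headers the question uses:

    // Minimal sketch: print the sample formats the AAC encoder accepts.
    // AVCodec::sample_fmts is an AV_SAMPLE_FMT_NONE-terminated array in
    // this (pre-5.x) API generation.
    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavutil/samplefmt.h>
    }
    #include <cstdio>

    int main() {
        avcodec_register_all(); // still required in this API generation
        AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_AAC);
        if (!codec)
            return 1;
        for (const enum AVSampleFormat *p = codec->sample_fmts;
             p && *p != AV_SAMPLE_FMT_NONE; ++p)
            std::printf("supported: %s\n", av_get_sample_fmt_name(*p));
        return 0;
    }

    For the built-in aac encoder this should print only fltp, consistent with the paragraph above.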

    Setup:

    int initialize_encoding_audio(const char *filename)
    {
       int ret;
       AVCodecID aud_codec_id = AV_CODEC_ID_AAC;
       AVSampleFormat sample_fmt = AV_SAMPLE_FMT_FLTP;

       avcodec_register_all();
       av_register_all();

       aud_codec = avcodec_find_encoder(aud_codec_id);
       avcodec_register(aud_codec);

       if (!aud_codec)
           return COULD_NOT_FIND_AUD_CODEC;

       aud_codec_context = avcodec_alloc_context3(aud_codec);
       if (!aud_codec_context)
           return CONTEXT_CREATION_ERROR;

       aud_codec_context->bit_rate = 192000;
       aud_codec_context->sample_rate = select_sample_rate(aud_codec);
       aud_codec_context->sample_fmt = sample_fmt;
       aud_codec_context->channel_layout = AV_CH_LAYOUT_STEREO;
       aud_codec_context->channels = av_get_channel_layout_nb_channels(aud_codec_context->channel_layout);

       aud_codec_context->codec = aud_codec;
       aud_codec_context->codec_id = aud_codec_id;

       ret = avcodec_open2(aud_codec_context, aud_codec, NULL);

       if (ret < 0)
           return COULD_NOT_OPEN_AUD_CODEC;

       outctx = avformat_alloc_context();
       ret = avformat_alloc_output_context2(&outctx, NULL, "mp4", filename);

       outctx->audio_codec = aud_codec;
       outctx->audio_codec_id = aud_codec_id;

       audio_st = avformat_new_stream(outctx, aud_codec);

       audio_st->codecpar->bit_rate = aud_codec_context->bit_rate;
       audio_st->codecpar->sample_rate = aud_codec_context->sample_rate;
       audio_st->codecpar->channels = aud_codec_context->channels;
       audio_st->codecpar->channel_layout = aud_codec_context->channel_layout;
       audio_st->codecpar->codec_id = aud_codec_id;
       audio_st->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
       audio_st->codecpar->format = sample_fmt;
       audio_st->codecpar->frame_size = aud_codec_context->frame_size;
       audio_st->codecpar->block_align = aud_codec_context->block_align;
       audio_st->codecpar->initial_padding = aud_codec_context->initial_padding;

       outctx->streams = new AVStream*[1];
       outctx->streams[0] = audio_st;

       av_dump_format(outctx, 0, filename, 1);

       if (!(outctx->oformat->flags & AVFMT_NOFILE))
       {
           if (avio_open(&outctx->pb, filename, AVIO_FLAG_WRITE) < 0)
               return COULD_NOT_OPEN_FILE;
       }

       ret = avformat_write_header(outctx, NULL);

       aud_frame = av_frame_alloc();
       aud_frame->nb_samples = aud_codec_context->frame_size;
       aud_frame->format = aud_codec_context->sample_fmt;
       aud_frame->channel_layout = aud_codec_context->channel_layout;

       int buffer_size = av_samples_get_buffer_size(NULL, aud_codec_context->channels, aud_codec_context->frame_size,
           aud_codec_context->sample_fmt, 0);

       av_frame_get_buffer(aud_frame, buffer_size / aud_codec_context->channels);

       if (!aud_frame)
           return COULD_NOT_ALLOCATE_FRAME;

       aud_frame_counter = 0;

       return 0;
    }

    Encoding:

    int encode_audio_samples(uint8_t **aud_samples)
    {
       int ret;

       int buffer_size = av_samples_get_buffer_size(NULL, aud_codec_context->channels, aud_codec_context->frame_size,
           aud_codec_context->sample_fmt, 0);

       for (size_t i = 0; i < buffer_size / aud_codec_context->channels; i++)
       {
           aud_frame->data[0][i] = aud_samples[0][i];
           aud_frame->data[1][i] = aud_samples[1][i];
       }

       aud_frame->pts = aud_frame_counter++;

       ret = avcodec_send_frame(aud_codec_context, aud_frame);
       if (ret < 0)
           return ERROR_ENCODING_SAMPLES_SEND;

       AVPacket pkt;
       av_init_packet(&pkt);
       pkt.data = NULL;
       pkt.size = 0;

       fflush(stdout);

       while (true)
       {
           ret = avcodec_receive_packet(aud_codec_context, &pkt);
           if (!ret)
           {
               av_packet_rescale_ts(&pkt, aud_codec_context->time_base, audio_st->time_base);

               pkt.stream_index = audio_st->index;
               av_write_frame(outctx, &pkt);
               av_packet_unref(&pkt);
           }
           if (ret == AVERROR(EAGAIN))
               break;
           else if (ret < 0)
               return ERROR_ENCODING_SAMPLES_RECEIVE;
           else
               break;
       }

       return 0;
    }

    Finish encoding:

    int finish_audio_encoding()
    {
       AVPacket pkt;
       av_init_packet(&pkt);
       pkt.data = NULL;
       pkt.size = 0;

       fflush(stdout);

       int ret = avcodec_send_frame(aud_codec_context, NULL);
       if (ret < 0)
           return ERROR_ENCODING_FRAME_SEND;

       while (true)
       {
           ret = avcodec_receive_packet(aud_codec_context, &pkt);
           if (!ret)
           {
               if (pkt.pts != AV_NOPTS_VALUE)
                   pkt.pts = av_rescale_q(pkt.pts, aud_codec_context->time_base, audio_st->time_base);
               if (pkt.dts != AV_NOPTS_VALUE)
                   pkt.dts = av_rescale_q(pkt.dts, aud_codec_context->time_base, audio_st->time_base);

               av_write_frame(outctx, &pkt);
               av_packet_unref(&pkt);
           }
           if (ret == -AVERROR(AVERROR_EOF))
               break;
           else if (ret < 0)
               return ERROR_ENCODING_FRAME_RECEIVE;
       }

       av_write_trailer(outctx);
    }

    Main:

    void get_audio_frame(float_t *left_samples, float_t *right_samples, int frame_size, float* t, float* tincr, float* tincr2)
    {
       int j, i;
       float v;
       for (j = 0; j < frame_size; j++)
       {
           v = sin(*t);
           *left_samples = v;
           *right_samples = v;

           left_samples++;
           right_samples++;

           *t += *tincr;
           *tincr += *tincr2;
       }
    }

    int main()
    {
       int frame_rate = 30;  // this should be like 96000 / 1024 or something, I guess?
       float t, tincr, tincr2;

       initialize_encoding_audio("audio.mp4");

       int sec = 50;

       float_t** aud_samples;
       int src_samples_linesize;
       int src_nb_samples = 1024;
       int src_channels = 2;

       int ret = av_samples_alloc_array_and_samples((uint8_t***)&aud_samples, &src_samples_linesize, src_channels,
           src_nb_samples, AV_SAMPLE_FMT_FLTP, 0);


       t = 0;
       tincr = 0;
       tincr2 = 0;

       for (size_t i = 0; i < frame_rate * sec; i++)
       {
           get_audio_frame(aud_samples[0], aud_samples[1], src_nb_samples, &t, &tincr, &tincr2);

           encode_audio_samples((uint8_t **)aud_samples);

       }

       finish_audio_encoding();
       //cleanup();

       return 0;
    }

    I guess the first thing I want to make sure I got right is the synthetic sound generation and how I transfer it to the AVFrame. Are my conversions correct? But feel free to point out anything else that might be wrong.
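    For comparison, here is a hedged sketch (not the question's code) of a more explicit way to fill a stereo AV_SAMPLE_FMT_FLTP frame: data[0] and data[1] are separate planes of floats, so the copy can be done on float pointers rather than byte by byte (the byte loop above moves the same bits, just less readably). One observation from the code itself: with t, tincr and tincr2 all initialized to 0, get_audio_frame generates pure silence, since sin(0) is 0 and t never advances.

    // Sketch: filling a stereo planar-float frame whose buffers were
    // allocated with av_frame_get_buffer(). Each plane holds
    // frame->nb_samples floats.
    extern "C" {
    #include <libavutil/frame.h>
    }

    static int fill_fltp_frame(AVFrame *frame, const float *left, const float *right)
    {
        int ret = av_frame_make_writable(frame); // ensure we own the buffers
        if (ret < 0)
            return ret;
        float *l = reinterpret_cast<float *>(frame->data[0]); // left plane
        float *r = reinterpret_cast<float *>(frame->data[1]); // right plane
        for (int i = 0; i < frame->nb_samples; i++) {
            l[i] = left[i];
            r[i] = right[i];
        }
        return 0;
    }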

    Thanks in advance!

    Edit: the whole source: http://pastebin.com/jYtmkhek

    Edit 2: added initialization of tincr & tincr2.

  • How to reduce the latency of CMAF?

    13 June 2023, by dannyomni

    I implemented CMAF on a self-built nginx server with FFmpeg, but I've run into some technical bottlenecks: the latency always stays at 3 seconds and cannot be reduced further, and I'm unable to get chunked transfer working.

    Briefly, my environment: I use OBS to push the live stream to the server, transcode it there, and deliver the content to users through a CDN.

    Here is some of my code

    ffmpeg:

    sudo ffmpeg -i rtmp://127.0.0.1:1935/live/stream -loglevel 40 -c copy -sc_threshold 0 -g 60 -bf 0 -map 0 -f dash -strict experimental -use_timeline 1 -use_template 1 -seg_duration 1 -window_size 5 -adaptation_sets "id=0,streams=v id=1,streams=a" -streaming 1 -dash_segment_type mp4 -utc_timing_url "http://time.akamai.com/?iso" -movflags frag_keyframe+empty_moov+default_base_moof -ldash 1 -hls_playlist 1 -master_m3u8_publish_rate 1 -remove_at_exit 1 /var/www/html/live/manifest.mpd

    nginx config:

    server_name myserver.com;
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
    add_header Access-Control-Allow-Headers 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
    add_header Access-Control-Expose-Headers 'Content-Length,Content-Range';
    root /var/www/html;
    index index.html index.nginx-debian.html;
        location / {
            chunked_transfer_encoding on;
        }

    HTML player:

    <script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
    <script src="https://cdn.dashjs.org/latest/dash.all.min.js"></script>

    <!-- the page's <video id="video"> element was lost in extraction -->

    <script>
        const video = document.getElementById('video');
        const hlsSrc = '/live/master.m3u8'; // Replace with your HLS stream URL
        const dashSrc = '/live/stream.mpd'; // Replace with your DASH stream URL

        function isHlsSupported() {
            return Hls.isSupported() || video.canPlayType('application/vnd.apple.mpegurl');
        }

        function isDashSupported() {
            return !!window.MediaSource && !!MediaSource.isTypeSupported('video/mp4; codecs="avc1.4d401e,mp4a.40.2"');
        }

        if (isHlsSupported()) {
            // Use HLS for playback
            const hls = new Hls({
                lowLatencyMode: true, // Enable low-latency mode
                liveSyncDurationCount: 1, // Number of segments used to sync live stream
                liveMaxLatencyDurationCount: 2, // Number of segments used to calculate the latency
                maxBufferLength: 2, // Max buffer length in seconds
                maxBufferSize: 1000 * 1000 * 100, // Max buffer size in bytes
                liveBackBufferLength: 0 // Max back buffer length in seconds (0 means back buffer disabled)
            });
            hls.loadSource(hlsSrc);
            hls.attachMedia(video);
            hls.on(Hls.Events.MANIFEST_PARSED, () => {
                video.play();
            });
        } else if (isDashSupported()) {
            // Use DASH for playback
            const player = dashjs.MediaPlayer().create();
            player.initialize(video, dashSrc, true);
            player.updateSettings({
                streaming: {
                    lowLatencyEnabled: true, // Enable low-latency mode
                    liveDelay: 1, // Set live delay in seconds, equal to 3 times the segment duration
                    liveCatchUpPlaybackRate: 1.2, // Playback rate for catching up when behind the live edge
                    liveCatchUpMinDrift: 0.5, // Minimum drift from live edge before initiating catch-up (in seconds)
                    bufferTimeAtTopQuality: 3, // Maximum buffer length in seconds
                    bufferToKeep: 0, // Duration of the back buffer in seconds (disable back buffer)
                }
            });
        } else {
            console.error('Neither HLS nor DASH playback is supported in this browser.');
        }
    </script>

    I hope to reduce the latency to 1 second.
