
Media (91)

Other articles (47)

  • Farm-mode installation

    4 February 2011

    Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which requires no real specific knowledge since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

  • Possible deployments

    31 January 2010

    Two types of deployment are possible, depending on two factors: the installation method chosen (standalone or farm), and the expected number of daily encodings and amount of traffic.
    Video encoding is a heavy process that consumes enormous system resources (CPU and RAM), so all of this must be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, no software is ever perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including its exact version; as precise a description of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (4217)

  • Is there a way to filter out I/B/P frames in an MPEG video stream and access the macroblock information?

    1 March 2014, by Vixian

    I am trying to make a program for video analysis of MPEG streams in C or C++.

    I was able to find out the frame types in a video file using

    ffprobe -show_frames -pretty File.mpg | grep 'pict_type' > pict_type.txt

    However, based on the order of the frames, they appear to be in "display" order (IBBPBBP...) rather than "transmission" order (IPBBPBBPBB...), and this is not an ideal approach anyway: since the command runs outside my program, I can't be sure it didn't skip any frames.

    I have tried OpenCV, but the information I need appears to be too low-level for it. I believe the solution lies in FFmpeg or libavcodec, but beyond the CLI the documentation is a nightmare. I am open to other solutions, though!

    The information I require is:

    • The type of each frame (I, B, or P)
    • The total number of macroblocks inside a frame
    • The number of intra-coded macroblocks inside a P frame
    • The number of both forward and backward predicted macroblocks inside a B frame
    • The number of just backward predicted macroblocks inside a B frame
    • The number of just forward predicted macroblocks inside a B frame

    I would be very grateful for your help!

  • Not all portions of video play well after concatenation

    24 September 2018, by srgbnd

    Node.JS 8.11.4, fluent-ffmpeg 2.1.2

    I need to concatenate random portions of the same length from different videos into one video file. The concatenation proceeds without errors, but when I play the final concatenated file, some portions play well with sound, while others have the video "frozen" although the sound keeps playing.

    What's the problem? I want all portions to play well in the final concatenated file.

    Concatenation config:

    trex@cave:/media/trex/safe1/Development/app$ head concat_config.txt
    file /media/trex/safe1/Development/app/videos/test/417912400.mp4
    inpoint 145
    outpoint 155
    file /media/trex/safe1/Development/app/videos/test/440386842.mp4
    inpoint 59
    outpoint 69
    file /media/trex/safe1/Development/app/videos/test/417912400.mp4
    inpoint 144
    outpoint 154
       ...

    In total, I have 16 portions from 2 videos. The duration of each portion is 10 sec. In the future, the number of video files and portions will be much bigger.

    trex@cave:/media/trex/safe1/Development/app$ ls -lh videos/test/
    total 344M
    -rw-r--r-- 1 trex trex  90M set 23 12:19 417912400.mp4
    -rw-r--r-- 1 trex trex 254M set 23 12:19 440386842.mp4

    JavaScript code for the concatenation:

    const fs = require('fs');
    const path = require('path');
    const _ = require('lodash');
    const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path;
    const ffprobePath = require('@ffprobe-installer/ffprobe').path;
    const ffmpeg = require('fluent-ffmpeg');
    ffmpeg.setFfmpegPath(ffmpegPath);
    ffmpeg.setFfprobePath(ffprobePath);


    function getMetadata(absPathToFile) {
     return new Promise(function (resolve, reject) {
       ffmpeg.ffprobe(absPathToFile, function(err, metadata) {
         if (err) {
           // return so resolve() is not also called on the error path
           return reject('get video meta: ' + err.toString());
         }
         resolve(metadata);
       });
     });
    }

    async function getFormat(files) {
     const pArray = files.map(async f => {
       const meta = await getMetadata(f);
       meta.format.short_filename = meta.format.filename.split('/').pop();
       return meta.format;
     });
     return await Promise.all(pArray);
    }

    function getSliceValues(duration, max = 10) {
     max = duration < max ? duration * 0.5 : max; // sec
     const start = _.random(0, duration * 0.9);
     const end = start + max > duration ? duration : start + max;
     return `inpoint ${Math.floor(start)}\noutpoint ${Math.floor(end)}\n`;
    }

    function addPath(arr, aPath) {
     return arr.map(e => path.join(aPath, e));
    }

    function createConfig(meta) {
     return meta.map(video => `file ${video.filename}\n${getSliceValues(video.duration)}`).join('');
    }

    function duplicateMeta(meta) {
     for (let i = 0; i < 3; i++) {
       meta.push(...meta);
     }
     return _.shuffle(meta);
    }

    const videoFolder = path.join(__dirname, 'videos/test');
    const finalVideo = 'final_video.mp4';
    const configFile = 'concat_config.txt';

    // main
    (async () => {
     let videos = addPath(fs.readdirSync(videoFolder), videoFolder);

     let meta = await getFormat(videos);
     meta = duplicateMeta(meta); // get multiple portions of videos

     fs.writeFileSync(configFile, createConfig(meta));

     const mpeg = ffmpeg();
     mpeg.input(configFile)
       .inputOptions(['-f concat', '-safe 0'])
       .outputOptions('-c copy')
       .save(finalVideo);
    })();

    Video file formats:

    { streams:
      [ { index: 0,
          codec_name: 'h264',
          codec_long_name: 'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10',
          profile: 'High',
          codec_type: 'video',
          codec_time_base: '1001/60000',
          codec_tag_string: 'avc1',
          codec_tag: '0x31637661',
          width: 1920,
          height: 1080,
          coded_width: 1920,
          coded_height: 1088,
          has_b_frames: 2,
          sample_aspect_ratio: '1:1',
          display_aspect_ratio: '16:9',
          pix_fmt: 'yuv420p',
          level: 40,
          color_range: 'tv',
          color_space: 'bt709',
          color_transfer: 'bt709',
          color_primaries: 'bt709',
          chroma_location: 'left',
          field_order: 'unknown',
          timecode: 'N/A',
          refs: 1,
          is_avc: 'true',
          nal_length_size: 4,
          id: 'N/A',
          r_frame_rate: '30000/1001',
          avg_frame_rate: '30000/1001',
          time_base: '1/30000',
          start_pts: 0,
          start_time: 0,
          duration_ts: 4936900,
          duration: 164.563333,
          bit_rate: 4323409,
          max_bit_rate: 'N/A',
          bits_per_raw_sample: 8,
          nb_frames: 4932,
          nb_read_frames: 'N/A',
          nb_read_packets: 'N/A',
          tags: [Object],
          disposition: [Object] },
        { index: 1,
          codec_name: 'aac',
          codec_long_name: 'AAC (Advanced Audio Coding)',
          profile: 'LC',
          codec_type: 'audio',
          codec_time_base: '1/48000',
          codec_tag_string: 'mp4a',
          codec_tag: '0x6134706d',
          sample_fmt: 'fltp',
          sample_rate: 48000,
          channels: 2,
          channel_layout: 'stereo',
          bits_per_sample: 0,
          id: 'N/A',
          r_frame_rate: '0/0',
          avg_frame_rate: '0/0',
          time_base: '1/48000',
          start_pts: 0,
          start_time: 0,
          duration_ts: 7899120,
          duration: 164.565,
          bit_rate: 256000,
          max_bit_rate: 263232,
          bits_per_raw_sample: 'N/A',
          nb_frames: 7714,
          nb_read_frames: 'N/A',
          nb_read_packets: 'N/A',
          tags: [Object],
          disposition: [Object] } ],
     format:
      { filename: '/media/trex/safe1/Development/app/videos/test/417912400.mp4',
        nb_streams: 2,
        nb_programs: 0,
        format_name: 'mov,mp4,m4a,3gp,3g2,mj2',
        format_long_name: 'QuickTime / MOV',
        start_time: 0,
        duration: 164.565,
        size: 94298844,
        bit_rate: 4584150,
        probe_score: 100,
        tags:
         { major_brand: 'mp42',
           minor_version: '0',
           compatible_brands: 'mp42mp41isomavc1',
           creation_time: '2015-09-21T19:11:21.000000Z' } },
     chapters: [] }
    { streams:
      [ { index: 0,
          codec_name: 'h264',
          codec_long_name: 'H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10',
          profile: 'High',
          codec_type: 'video',
          codec_time_base: '1001/48000',
          codec_tag_string: 'avc1',
          codec_tag: '0x31637661',
          width: 2560,
          height: 1440,
          coded_width: 2560,
          coded_height: 1440,
          has_b_frames: 2,
          sample_aspect_ratio: '1:1',
          display_aspect_ratio: '16:9',
          pix_fmt: 'yuv420p',
          level: 51,
          color_range: 'tv',
          color_space: 'bt709',
          color_transfer: 'bt709',
          color_primaries: 'bt709',
          chroma_location: 'left',
          field_order: 'unknown',
          timecode: 'N/A',
          refs: 1,
          is_avc: 'true',
          nal_length_size: 4,
          id: 'N/A',
          r_frame_rate: '24000/1001',
          avg_frame_rate: '24000/1001',
          time_base: '1/24000',
          start_pts: 0,
          start_time: 0,
          duration_ts: 4206200,
          duration: 175.258333,
          bit_rate: 11891834,
          max_bit_rate: 'N/A',
          bits_per_raw_sample: 8,
          nb_frames: 4202,
          nb_read_frames: 'N/A',
          nb_read_packets: 'N/A',
          tags: [Object],
          disposition: [Object] },
        { index: 1,
          codec_name: 'aac',
          codec_long_name: 'AAC (Advanced Audio Coding)',
          profile: 'LC',
          codec_type: 'audio',
          codec_time_base: '1/48000',
          codec_tag_string: 'mp4a',
          codec_tag: '0x6134706d',
          sample_fmt: 'fltp',
          sample_rate: 48000,
          channels: 2,
          channel_layout: 'stereo',
          bits_per_sample: 0,
          id: 'N/A',
          r_frame_rate: '0/0',
          avg_frame_rate: '0/0',
          time_base: '1/48000',
          start_pts: 0,
          start_time: 0,
          duration_ts: 8414160,
          duration: 175.295,
          bit_rate: 256000,
          max_bit_rate: 262152,
          bits_per_raw_sample: 'N/A',
          nb_frames: 8217,
          nb_read_frames: 'N/A',
          nb_read_packets: 'N/A',
          tags: [Object],
          disposition: [Object] } ],
     format:
      { filename: '/media/trex/safe1/Development/app/videos/test/440386842.mp4',
        nb_streams: 2,
        nb_programs: 0,
        format_name: 'mov,mp4,m4a,3gp,3g2,mj2',
        format_long_name: 'QuickTime / MOV',
        start_time: 0,
        duration: 175.295,
        size: 266214940,
        bit_rate: 12149345,
        probe_score: 100,
        tags:
         { major_brand: 'mp42',
           minor_version: '0',
           compatible_brands: 'mp42mp41isomavc1',
           creation_time: '2015-11-15T19:30:49.000000Z' } },
     chapters: [] }
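    A likely cause worth checking (an editor's sketch, not part of the original post): with `-c copy`, the concat demuxer can only cut at keyframes, so a portion whose `inpoint` falls between keyframes plays as frozen video (audio cuts more finely) until the next keyframe; the two sources above also differ in resolution and frame rate (1920x1080 at 30000/1001 vs 2560x1440 at 24000/1001), which stream copy leaves mismatched. A hedged variant of the `save` step that re-encodes instead; codec, preset, and filter values are illustrative:

```javascript
// Hypothetical variant of the save step above: re-encode instead of "-c copy"
// so cuts are frame-accurate and both sources are normalized to one
// resolution and frame rate before concatenation.
const mpeg = ffmpeg();
mpeg.input(configFile)
  .inputOptions(['-f concat', '-safe 0'])
  .outputOptions(['-c:v libx264', '-preset veryfast', '-c:a aac'])
  .videoFilters(['scale=1920:1080', 'fps=30000/1001'])
  .save(finalVideo);
```

    Re-encoding is slower than stream copy, but it removes both failure modes at once; if speed matters, an alternative is snapping each `inpoint` to a keyframe timestamp obtained from ffprobe.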
  • Best approach to real time http streaming to HTML5 video client

    12 October 2016, by deandob

    I'm really stuck trying to understand the best way to stream real-time output of FFmpeg to an HTML5 client using node.js. There are a number of variables at play, and I don't have a lot of experience in this space, having spent many hours trying different combinations.

    My use case is:

    1) The RTSP H.264 stream from an IP video camera is picked up by FFmpeg and remuxed into an MP4 container using the following FFmpeg settings in node, output to STDOUT. This is only run on the initial client connection, so that partial-content requests don't try to spawn FFmpeg again.

    liveFFMPEG = child_process.spawn("ffmpeg", [
                   "-i", "rtsp://admin:12345@192.168.1.234:554" , "-vcodec", "copy", "-f",
                   "mp4", "-reset_timestamps", "1", "-movflags", "frag_keyframe+empty_moov",
                   "-"   // output to stdout
                   ],  {detached: false});

    2) I use the node http server to capture the STDOUT and stream that back to the client upon a client request. When the client first connects I spawn the above FFMPEG command line then pipe the STDOUT stream to the HTTP response.

    liveFFMPEG.stdout.pipe(resp);

    I have also used the stream data event to write the FFmpeg data to the HTTP response, but it makes no difference:

    liveFFMPEG.stdout.on("data", function(data) {
           resp.write(data);
    });

    I use the following HTTP headers (which are also used, and working, when streaming pre-recorded files):

    var total = 999999999         // fake a large file
    var partialstart = 0
    var partialend = total - 1

    if (range !== undefined) {
       var parts = range.replace(/bytes=/, "").split("-");
       var partialstart = parts[0];
       var partialend = parts[1];
    }

    var start = parseInt(partialstart, 10);
    var end = partialend ? parseInt(partialend, 10) : total;   // fake a large file if no range request

    var chunksize = (end-start)+1;

    resp.writeHead(206, {
                     'Transfer-Encoding': 'chunked'
                    , 'Content-Type': 'video/mp4'
                    , 'Content-Length': chunksize // large size to fake a file
                    , 'Accept-Ranges': 'bytes ' + start + "-" + end + "/" + total
    });

    3) The client has to use HTML5 video tags.
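    An editor's note for comparison with the header code above (not the asker's code): in standard HTTP the response header that carries the byte span of a 206 is `Content-Range` (the `Range` header belongs to the request, and `Accept-Ranges: bytes` only advertises support), and `Content-Length` must not be combined with `Transfer-Encoding: chunked`. A hedged helper showing the conforming shape:

```javascript
// Sketch of standards-conforming 206 headers. Content-Range carries the byte
// span; Accept-Ranges only advertises range support; Content-Length and
// Transfer-Encoding: chunked are mutually exclusive, so only one is set.
function partialContentHeaders(start, end, total) {
  return {
    'Content-Type': 'video/mp4',
    'Accept-Ranges': 'bytes',
    'Content-Range': `bytes ${start}-${end}/${total}`,
    'Content-Length': String(end - start + 1),
  };
}
```

    Usage would be `resp.writeHead(206, partialContentHeaders(start, end, total));` for file playback; for a live stream with no known length, the alternative is a 200 response with chunked transfer and no `Content-Length` at all.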

    I have no problems with streaming playback (using fs.createReadStream with HTTP 206 partial content) to the HTML5 client of a video file previously recorded with the above FFmpeg command line (but saved to a file instead of STDOUT). So I know the FFmpeg stream is correct, and I can even watch the live stream correctly in VLC when connecting to the node HTTP server.

    However, trying to stream live from FFmpeg via node HTTP seems to be a lot harder, as the client displays one frame and then stops. I suspect the problem is that I am not setting up the HTTP connection to be compatible with the HTML5 video client. I have tried a variety of things, like using HTTP 206 (partial content) and 200 responses, and putting the data into a buffer before streaming, with no luck, so I need to go back to first principles to make sure I'm setting this up the right way.

    Here is my understanding of how this should work; please correct me if I'm wrong:

    1) FFmpeg should be set up to fragment the output and use an empty MOOV (the frag_keyframe and empty_moov movflags). This means the client does not use the MOOV atom, which is typically at the end of the file and isn't relevant when streaming (there is no end of file); it also means no seeking is possible, which is fine for my use case.

    2) Even though I use MP4 fragments and an empty MOOV, I still have to use HTTP partial content, as otherwise the HTML5 player will wait until the entire stream is downloaded before playing, which with a live stream never happens, so it is unworkable.

    3) I don't understand why piping the STDOUT stream to the HTTP response doesn't work when streaming live, yet if I save to a file I can stream that file easily to HTML5 clients using similar code. Maybe it's a timing issue, as it takes a second for the FFmpeg spawn to start, connect to the IP camera and send chunks to node, and the node data events are irregular as well. However, the bytestream should be exactly the same as saving to a file, and HTTP should be able to cater for delays.

    4) When checking the network log from the HTTP client when streaming an MP4 file created by FFmpeg from the camera, I see there are 3 client requests: a general GET request for the video, for which the HTTP server returns about 40 KB; then a partial-content request with a byte range for the last 10 KB of the file; then a final request for the bits in the middle not yet loaded. Maybe the HTML5 client, once it receives the first response, is asking for the last part of the file to load the MP4 MOOV atom? If that is the case, it won't work for streaming, as there is no MOOV atom and no end of file.

    5) When checking the network log when trying to stream live, I get an aborted initial request with only about 200 bytes received, then a re-request, again aborted after 200 bytes, and a third request which is only 2 KB long. I don't understand why the HTML5 client would abort the request, as the bytestream is exactly the same as the one I can successfully use when streaming from a recorded file. It also seems node isn't sending the rest of the FFmpeg stream to the client, yet I can see the FFmpeg data in the .on event routine, so it is reaching the node HTTP server.

    6) Although I think piping the STDOUT stream to the HTTP response buffer should work, do I have to build an intermediate buffer and stream that will allow the HTTP partial-content client requests to work properly, like they do when the client (successfully) reads a file? I think this is the main reason for my problems, but I'm not exactly sure how best to set that up in Node. And I don't know how to handle a client request for data at the end of the file, as there is no end of file.

    7) Am I on the wrong track in trying to handle 206 partial-content requests, and should this work with normal 200 HTTP responses? HTTP 200 responses work fine for VLC, so I suspect the HTML5 video client will only work with partial-content requests?

    As I'm still learning this stuff, it's difficult to work through the various layers of this problem (FFmpeg, node, streaming, HTTP, HTML5 video), so any pointers will be greatly appreciated. I have spent hours researching on this site and the net, and I have not come across anyone who has been able to do real-time streaming in node, but I can't be the first, and I think this should be able to work (somehow!).