
Other articles (82)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

On other sites (7228)

  • ffmpeg - duration usage in input text file

    12 May 2018, by Voicu

    I am trying to use ffmpeg to concatenate video segments with some black screen. To do that, I first generated a blank 10-second video (no audio track) with:

    $ ffmpeg -f lavfi -i color=black:s=320x240:r=1 -f lavfi -i anullsrc -t 10 -vcodec libvpx -an blank.mkv

    I then created the simplest possible scenario in the input.txt file (contents below), in order to have three seconds of black screen followed by some video (no audio track):

    file 'blank.mkv'
    duration 3
    file 'video_example.mkv'

    And, finally, I ran the following ffmpeg command to concatenate the contents of that input file:

    $ ffmpeg -f concat -i input.txt -codec:v copy -codec:a copy output.mkv

    The issue I have is that the "duration 3" directive is not honoured: the final video still has ten seconds of black frames (instead of three) followed by my video. A "Non-monotonous DTS in output stream 0:0 ..." message is also shown whenever a duration line is present in the file. If I remove the duration line the warnings go away, but I still get the full 10-second black screen at the start of the output.

    Full output of the ffmpeg concat command:

    $ ffmpeg -hide_banner -f concat -i input.txt -codec:v copy -codec:a copy output.mkv
    Input #0, concat, from 'input.txt':
     Duration: N/A, start: 0.000000, bitrate: N/A
       Stream #0:0: Video: vp8, yuv420p(progressive), 320x240, SAR 1:1 DAR 4:3, 1 fps, 1 tbr, 1k tbn, 1k tbc
       Metadata:
         ENCODER         : Lavc57.107.100 libvpx
         DURATION        : 00:00:10.000000000
    File 'output.mkv' already exists. Overwrite ? [y/N] y
    Output #0, matroska, to 'output.mkv':
     Metadata:
       encoder         : Lavf57.83.100
       Stream #0:0: Video: vp8 (VP80 / 0x30385056), yuv420p(progressive), 320x240 [SAR 1:1 DAR 4:3], q=2-31, 1 fps, 1 tbr, 1k tbn, 1k tbc
       Metadata:
         ENCODER         : Lavc57.107.100 libvpx
         DURATION        : 00:00:10.000000000
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    [concat @ 000000000031a440] DTS 3000 < 9000 out of order
    [matroska @ 0000000000328420] Non-monotonous DTS in output stream 0:0; previous: 9000, current: 3000; changing to 9000. This may result in incorrect timestamps in the output file.
    [matroska @ 0000000000328420] Non-monotonous DTS in output stream 0:0; previous: 9000, current: 4001; changing to 9000. This may result in incorrect timestamps in the output file.
    [matroska @ 0000000000328420] Non-monotonous DTS in output stream 0:0; previous: 9000, current: 4998; changing to 9000. This may result in incorrect timestamps in the output file.
    [matroska @ 0000000000328420] Non-monotonous DTS in output stream 0:0; previous: 9000, current: 6004; changing to 9000. This may result in incorrect timestamps in the output file.
    [matroska @ 0000000000328420] Non-monotonous DTS in output stream 0:0; previous: 9000, current: 7002; changing to 9000. This may result in incorrect timestamps in the output file.
    [matroska @ 0000000000328420] Non-monotonous DTS in output stream 0:0; previous: 9000, current: 8005; changing to 9000. This may result in incorrect timestamps in the output file.
    frame= 5794 fps=0.0 q=-1.0 Lsize=    7109kB time=01:37:09.70 bitrate=  10.0kbits/s speed=5.16e+004x
    video:7043kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.926229%

    Any idea what I am doing wrong? The warning seems to hint at the issue here.

    Other possibly useful info:

    $ ffprobe -hide_banner blank.mkv
    Input #0, matroska,webm, from 'blank.mkv':
     Metadata:
       ENCODER         : Lavf57.83.100
     Duration: 00:00:10.00, start: 0.000000, bitrate: 1 kb/s
       Stream #0:0: Video: vp8, yuv420p(progressive), 320x240, SAR 1:1 DAR 4:3, 1 fps, 1 tbr, 1k tbn, 1k tbc (default)
       Metadata:
         ENCODER         : Lavc57.107.100 libvpx
         DURATION        : 00:00:10.000000000

    $ ffprobe -hide_banner video_example.mkv
    Input #0, matroska,webm, from 'video_example.mkv':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2018-05-04T17:57:04.000000Z
     Duration: 01:37:08.70, start: 15434.269000, bitrate: 9 kb/s
       Stream #0:0(eng): Video: vp8, yuv420p(progressive), 320x240, SAR 1:1 DAR 4:3, 1 fps, 1 tbr, 1k tbn, 1k tbc (default)
       Metadata:
         title           : Video

    $ ffmpeg -v
    ffmpeg version 3.4.2 Copyright (c) 2000-2018 the FFmpeg developers
     built with gcc 7.3.0 (GCC)
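
    For what it’s worth, a workaround sketch that sidesteps the duration directive entirely, assuming regenerating the blank clip is acceptable: produce the black segment at the desired three-second length (the same command as above, with -t 3) and concatenate without any duration line:

    $ ffmpeg -f lavfi -i color=black:s=320x240:r=1 -f lavfi -i anullsrc -t 3 -vcodec libvpx -an blank3.mkv

    input.txt then becomes:

    file 'blank3.mkv'
    file 'video_example.mkv'

    Since every packet of blank3.mkv now falls within its stated length, stream copy no longer produces out-of-order DTS.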
  • FFmpeg - Multiple videos with 4 areas and different play times

    25 May 2018, by Robert Smith

    I have videos as follows:

    video   time
    ======= =========
    Area 1:
    video1a    0-2000
    video1b 2500-3000
    video1c 3000-4000

    Area 2:
    video2a  300- 400
    video2b  800- 900

    Area 3:
    video3a  400- 500
    video3b  700- 855

    Area 4:
    video4a  400- 500
    video4b  800- 900

    Basically these are security camera outputs, and they should display in 4 areas:

    So far I have the following:

    ffmpeg
       -i 1.avi -i 2.avi -i 3.avi -i 4.avi
       -filter_complex "
           nullsrc=size=640x480 [base];
           [0:v] setpts=PTS-STARTPTS, scale=320x240 [upperleft];
           [1:v] setpts=PTS-STARTPTS, scale=320x240 [upperright];
           [2:v] setpts=PTS-STARTPTS, scale=320x240 [lowerleft];
           [3:v] setpts=PTS-STARTPTS, scale=320x240 [lowerright];
           [base][upperleft] overlay=shortest=1 [tmp1];
           [tmp1][upperright] overlay=shortest=1:x=320 [tmp2];
           [tmp2][lowerleft] overlay=shortest=1:y=240 [tmp3];
           [tmp3][lowerright] overlay=shortest=1:x=320:y=240
       "
       -c:v libx264 output.mp4

    But there are two things I am missing:

    • The above only handles 4 video files; I need a way to add additional files to each area (for example, video1b should play at its corresponding time after video1a in the same area)
    • How do I specify the beginning/ending times shown above for each file? (A sketch of one approach follows below.)
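
    A minimal sketch of one approach for a single area, assuming the times in the table are seconds and each clip is at least as long as its window (the filenames are placeholders): delay a clip by offsetting its PTS, and gate its overlay with an enable expression so it only shows inside its window:

    ffmpeg
       -i video1a.avi -i video1b.avi
       -filter_complex "
           color=c=black:size=640x480:d=3000 [base];
           [0:v] trim=duration=2000, setpts=PTS-STARTPTS, scale=320x240 [v1a];
           [1:v] trim=duration=500, setpts=PTS-STARTPTS+2500/TB, scale=320x240 [v1b];
           [base][v1a] overlay=eof_action=pass [tmp1];
           [tmp1][v1b] overlay=eof_action=pass:enable='between(t,2500,3000)'
       "
       -c:v libx264 area1.mp4

    Here eof_action=pass replaces shortest=1 so the canvas is not cut when the first clip ends, and d=3000 on the color source bounds the total length instead. The other three areas repeat the same per-input pattern, keeping the x/y offsets from the command above.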
  • How to check when ffmpeg completes a task?

    25 May 2018, by Andrew

    I started learning how to use ffmpeg a few hours ago to generate video thumbnails.

    These are some results (two example thumbnail sheets were attached as images 1 and 2).

    I used the same size (width × height) as YouTube’s. Each image contains at most 25 thumbnails (5×5), each 160x90.

    Everything looks good until this part:

    public async Task GetVideoThumbnailsAsync(string videoPath, string videoId)
    {
       byte thumbnailWidth = 160;
       byte thumbnailHeight = 90;

       string fps = "1/2";

       videoPath = Path.Combine(_environment.WebRootPath, videoPath);

       string videoThumbnailsPath = Path.Combine(_environment.WebRootPath, $"assets/images/video_thumbnails/{videoId}");
       string outputImagePath = Path.Combine(videoThumbnailsPath, "item_%d.jpg");

       Directory.CreateDirectory(videoThumbnailsPath);

       using (var ffmpeg = new Process())
       {
           ffmpeg.StartInfo.Arguments = $" -i {videoPath} -vf fps={fps} -s {thumbnailWidth}x{thumbnailHeight} {outputImagePath}";
           ffmpeg.StartInfo.FileName = Path.Combine(_environment.ContentRootPath, "FFmpeg/ffmpeg.exe");
           ffmpeg.Start();
       }

       await Task.Delay(3000);

       await GenerateThumbnailsAsync(videoThumbnailsPath, videoId);
    }

    I’m having trouble with this line:

    await Task.Delay(3000);

    When I was learning how to use ffmpeg, nothing mentioned this. After several failed hours, I noticed that:

    An mp4 video (1 min 31 sec, 1.93 MB) needs a delay of about 1000 ms, while another mp4 video (1 min 49 sec, 7.25 MB) needs about 3000 ms.

    If I don’t use Task.Delay and try to read the files immediately, I get 0 (there are no files in the directory yet).

    Also, files of different lengths require different delay times, and I don’t know how to calculate them.

    And my question is: how do I check when the task has completed?

    P.S.: I don’t mean to drag JavaScript into this, but in JS there is something called a Promise:

    var promise = new Promise(function (done) {
       var todo = function () {
           done();
       };

       todo();
    });

    promise.then(function () {
       console.log('DONE...');
    });

    I want to edit the code like that; the sketch below is roughly what I’m aiming for.
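
    A minimal sketch of that Promise-style idea using .NET’s Process API, reusing the variables from the method above: set EnableRaisingEvents, complete a TaskCompletionSource from the Exited event, and await it instead of guessing a delay (on .NET 5+, await ffmpeg.WaitForExitAsync() would do the same in one line):

    using (var ffmpeg = new Process())
    {
        ffmpeg.StartInfo.FileName = Path.Combine(_environment.ContentRootPath, "FFmpeg/ffmpeg.exe");
        ffmpeg.StartInfo.Arguments = $" -i {videoPath} -vf fps={fps} -s {thumbnailWidth}x{thumbnailHeight} {outputImagePath}";
        ffmpeg.EnableRaisingEvents = true;                      // required for the Exited event to fire

        var tcs = new TaskCompletionSource<bool>();
        ffmpeg.Exited += (sender, e) => tcs.TrySetResult(true); // plays the role of done()

        ffmpeg.Start();
        await tcs.Task;                                         // the "then": resumes once ffmpeg has exited
    }

    await GenerateThumbnailsAsync(videoThumbnailsPath, videoId);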

    Thank you!