
Other articles (96)

  • Improving the base version

    13 September 2013

    A nicer multiple-selection widget
    The Chosen plugin improves the usability of multiple-selection fields. See the two images that follow for a comparison.
    To use it, enable the Chosen plugin (general site configuration > plugin management), then configure it (templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune those menus.
    Menus created when the site is initialised
    By default, three menus are created automatically when the site is initialised: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...)

  • Submitting bugs and patches

    10 April 2011

    Unfortunately, no piece of software is ever perfect...
    If you think you have found a bug, report it in our ticket system, taking care to include the relevant details: the browser type and exact version with which you see the problem; as precise a description of the problem as possible; if possible, the steps needed to reproduce it; a link to the site / page in question;
    If you think you have fixed the bug yourself (...)

On other sites (9033)

  • Problem using -to, combining -to with framerate change

    21 April 2019, by LLL

    I have a question regarding the use of the -to option:

    In FFmpeg version 3 (at least 3.2.12, as included in Debian 9), -to was an output-only option. In more recent versions it seems to be accepted as an input option as well (as documented at https://www.ffmpeg.org/ffmpeg.html#Main-options), but I run into limitations when using it in conjunction with -r.
    What I expect, if I increase the frame rate of the input, is that the output will be shorter. But that is not the case when I'm not using the whole input:

    ffmpeg -r 30 -to 10 -i my18fpsFile.avi out.mp4

    out.mp4 will be 10 seconds long, so the input has been read further than 10 seconds in.
    In other words, even as an input option, -to seems to behave like an output option.

    Tested with ffmpeg version N-48102-g7cab5471b2-static https://johnvansickle.com/ffmpeg/ on Debian Stretch.

    Thanks for your advice or confirmation that this is not normal.

    Edit: Thank you Gyan for your reply and your explanation.

    While it seems to work, some testing reminded me why I wasn't satisfied with this solution:
    Specifying -r as an input option forces the frame rate used to read the input and ensures that one frame in the input is one frame in the output. This is what I want.

    Using your solution with the filter on the output sometimes causes dropped frames (which I have never had with -r).
    However, some more testing leads me to the conclusion that the dropped-frames problem happens only with an mkv container! Your solution works fine with the other containers I have tested, but with mkv (and various video codecs; the codec doesn't seem to make a difference) I get a lot of dropped frames.
    Using -r with mkv causes no dropped frames.

    In conclusion, I’m not sure how to generalize this solution, as avoiding mkv is annoying for me.
    But thank you, and if anyone has some more experience or ideas, please chime in.

    Adding logs:
    Note the line:
    [avi @ 0x6cf6880] parser not found for codec rawvideo, packets or times may be invalid
    I guess this is the cause of the problem? Is there a solution if the files are raw AVI with no index?

    Although the warning is the same, I don't get dropped frames when I run the exact same command with f.mov instead of f.mkv.

    ffmpeg -v 40 -to 10 -i f.avi -vf setpts=N/30/TB -r 30 f.mkv
    ffmpeg version N-48102-g7cab5471b2-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2019 the FFmpeg developers
     built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
     configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
     libavutil      56. 26.100 / 56. 26.100
     libavcodec     58. 46.100 / 58. 46.100
     libavformat    58. 26.100 / 58. 26.100
     libavdevice    58.  6.101 / 58.  6.101
     libavfilter     7. 48.100 /  7. 48.100
     libswscale      5.  4.100 /  5.  4.100
     libswresample   3.  4.100 /  3.  4.100
     libpostproc    55.  4.100 / 55.  4.100
    [avi @ 0x6cf6880] parser not found for codec rawvideo, packets or times may be invalid.
       Last message repeated 1 times
    Input #0, avi, from 'f.avi':
     Duration: 00:01:11.62, start: 0.000000, bitrate: 846067 kb/s
       Stream #0:0: Video: rawvideo, 1 reference frame, bgr24, 1632x1200, 846638 kb/s, 18 fps, 18 tbr, 18 tbn, 18 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    [graph 0 input from stream 0:0 @ 0x6d0fc00] w:1632 h:1200 pixfmt:bgr24 tb:1000/17999 fr:17999/1000 sar:0/1 sws_param:flags=2
    [auto_scaler_0 @ 0x6d105c0] w:iw h:ih flags:'bicubic' interl:0
    [format @ 0x6cfe5c0] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_setpts_0' and the filter 'format'
    [trim_in_0_0 @ 0x6d109c0] TB:0.055559 FRAME_RATE:17.999000 SAMPLE_RATE:nan
    [auto_scaler_0 @ 0x6d105c0] w:1632 h:1200 fmt:bgr24 sar:0/1 -> w:1632 h:1200 fmt:yuv444p sar:0/1 flags:0x4
    [libx264 @ 0x6cfb2c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
    [libx264 @ 0x6cfb2c0] profile High 4:4:4 Predictive, level 4.0, 4:4:4, 8-bit
    [libx264 @ 0x6cfb2c0] 264 - core 157 r2935 545de2f - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, matroska, to 'f.mkv':
     Metadata:
       encoder         : Lavf58.26.100
       Stream #0:0: Video: h264 (libx264), 1 reference frame (H264 / 0x34363248), yuv444p, 1632x1200, q=-1--1, 30 fps, 1k tbn, 30 tbc
       Metadata:
         encoder         : Lavc58.46.100 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Past duration 0.666481 too large
    *** dropping frame 5 from stream 0 at ts 3
    Past duration 0.666191 too large
    *** dropping frame 9 from stream 0 at ts 8
    Past duration 0.665916 too large
    *** dropping frame 13 from stream 0 at ts 13
    Past duration 0.665642 too large
    *** dropping frame 17 from stream 0 at ts 18
    Past duration 0.665367 too large
    *** dropping frame 21 from stream 0 at ts 23
    Past duration 0.665092 too large
    *** dropping frame 25 from stream 0 at ts 28
    Past duration 0.664803 too large
    *** dropping frame 29 from stream 0 at ts 33me=00:00:00.00 bitrate=N/A dup=0 drop=6 speed=   0x    
    Past duration 0.664528 too large
    *** dropping frame 33 from stream 0 at ts 38
    Past duration 0.664253 too large
    *** dropping frame 37 from stream 0 at ts 43
    Past duration 0.663979 too large
    *** dropping frame 41 from stream 0 at ts 48
    Past duration 0.663689 too large
    *** dropping frame 45 from stream 0 at ts 53
    Past duration 0.663414 too large      1kB time=00:00:00.00 bitrate=N/A dup=0 drop=11 speed=   0x    
    *** dropping frame 49 from stream 0 at ts 58
    Past duration 0.663139 too large
    *** dropping frame 53 from stream 0 at ts 63ime=00:00:00.00 bitrate=5768.0kbits/s dup=0 drop=12 speed=0.00049x    
    Past duration 0.662865 too large
    *** dropping frame 57 from stream 0 at ts 68ime=00:00:00.13 bitrate=  43.0kbits/s dup=0 drop=13 speed=0.0514x    
    Past duration 0.662590 too large
    *** dropping frame 61 from stream 0 at ts 73ime=00:00:00.30 bitrate=  19.2kbits/s dup=0 drop=14 speed=0.094x    
    Past duration 0.662300 too large
    *** dropping frame 65 from stream 0 at ts 78ime=00:00:00.46 bitrate=  12.3kbits/s dup=0 drop=15 speed=0.125x    
    Past duration 0.662025 too large       1kB time=00:00:00.60 bitrate=   9.6kbits/s dup=0 drop=16 speed=0.142x    
    *** dropping frame 69 from stream 0 at ts 83
    Past duration 0.661751 too large       1kB time=00:00:00.76 bitrate=   7.5kbits/s dup=0 drop=17 speed=0.162x    
    *** dropping frame 73 from stream 0 at ts 88
    Past duration 0.661476 too large       1kB time=00:00:00.93 bitrate=   6.2kbits/s dup=0 drop=18 speed=0.175x    
    *** dropping frame 77 from stream 0 at ts 93
    Past duration 0.661201 too large
    *** dropping frame 81 from stream 0 at ts 98ime=00:00:01.13 bitrate=8036.9kbits/s dup=0 drop=19 speed=0.193x    
    Past duration 0.660912 too large
    *** dropping frame 85 from stream 0 at ts 103me=00:00:01.30 bitrate=7005.3kbits/s dup=0 drop=20 speed=0.204x    
    Past duration 0.660637 too large
    *** dropping frame 89 from stream 0 at ts 108me=00:00:01.46 bitrate=6208.3kbits/s dup=0 drop=21 speed=0.21x    
    Past duration 0.660362 too large
    *** dropping frame 93 from stream 0 at ts 113
    Past duration 0.660088 too large    1113kB time=00:00:01.76 bitrate=5154.9kbits/s dup=0 drop=23 speed=0.235x    
    *** dropping frame 97 from stream 0 at ts 118
    Past duration 0.659813 too large    1113kB time=00:00:01.83 bitrate=4969.4kbits/s dup=0 drop=24 speed=0.229x    
    *** dropping frame 101 from stream 0 at ts 123
    Past duration 0.659523 too large
    *** dropping frame 105 from stream 0 at ts 128e=00:00:02.13 bitrate=4270.8kbits/s dup=0 drop=25 speed=0.235x    
    Past duration 0.659248 too large
    *** dropping frame 109 from stream 0 at ts 133
    Past duration 0.658974 too large    2399kB time=00:00:02.43 bitrate=8075.2kbits/s dup=0 drop=27 speed=0.252x    
    *** dropping frame 113 from stream 0 at ts 138
    Past duration 0.658699 too large
    *** dropping frame 117 from stream 0 at ts 143e=00:00:02.63 bitrate=7462.1kbits/s dup=0 drop=28 speed=0.25x    
    Past duration 0.658424 too large
    *** dropping frame 121 from stream 0 at ts 148
    Past duration 0.658134 too large    2399kB time=00:00:02.93 bitrate=6699.1kbits/s dup=0 drop=30 speed=0.266x    
    *** dropping frame 125 from stream 0 at ts 153
    Past duration 0.657860 too large
    *** dropping frame 129 from stream 0 at ts 158e=00:00:03.13 bitrate=6271.6kbits/s dup=0 drop=31 speed=0.265x    
    Past duration 0.657585 too large
    *** dropping frame 133 from stream 0 at ts 163
    Past duration 0.657310 too large    2399kB time=00:00:03.33 bitrate=5895.4kbits/s dup=0 drop=33 speed=0.271x    
    *** dropping frame 137 from stream 0 at ts 168e=00:00:03.46 bitrate=5667.6kbits/s dup=0 drop=33 speed=0.269x    
    Past duration 0.657036 too large
    *** dropping frame 141 from stream 0 at ts 173
    Past duration 0.656746 too large
    No more output streams to write to, finishing.e=00:00:03.80 bitrate=5171.0kbits/s dup=0 drop=35 speed=0.279x    
    frame=  145 fps=8.0 q=-1.0 Lsize=    4590kB time=00:00:05.90 bitrate=6372.0kbits/s dup=0 drop=35 speed=0.326x    
    video:4588kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.042357%
    Input file #0 (f.avi):
     Input stream #0:0 (video): 181 packets read (1063411200 bytes); 181 frames decoded;
     Total: 181 packets (1063411200 bytes) demuxed
    Output file #0 (f.mkv):
     Output stream #0:0 (video): 145 frames encoded; 145 packets muxed (4698140 bytes);
     Total: 145 packets (4698140 bytes) muxed
    [AVIOContext @ 0x6cfe940] Statistics: 20 seeks, 33 writeouts
    [libx264 @ 0x6cfb2c0] frame I:4     Avg QP:25.17  size: 84331
    [libx264 @ 0x6cfb2c0] frame P:40    Avg QP:26.64  size: 47807
    [libx264 @ 0x6cfb2c0] frame B:101   Avg QP:27.85  size: 24236
    [libx264 @ 0x6cfb2c0] consecutive B-frames:  4.1%  8.3%  2.1% 85.5%
    [libx264 @ 0x6cfb2c0] mb I  I16..4: 29.5%  0.0% 70.5%
    [libx264 @ 0x6cfb2c0] mb P  I16..4: 12.0%  0.0% 20.1%  P16..4: 38.5% 16.5%  4.5%  0.0%  0.0%    skip: 8.4%
    [libx264 @ 0x6cfb2c0] mb B  I16..4:  1.8%  0.0%  2.7%  B16..8: 46.5% 11.4%  1.6%  direct: 7.6%  skip:28.4%  L0:40.6% L1:55.6% BI: 3.8%
    [libx264 @ 0x6cfb2c0] coded y,u,v intra: 55.9% 11.8% 19.2% inter: 22.1% 1.1% 2.6%
    [libx264 @ 0x6cfb2c0] i16 v,h,dc,p: 30% 15%  7% 48%
    [libx264 @ 0x6cfb2c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 26% 16% 18%  7%  8%  8%  7%  7%  3%
    [libx264 @ 0x6cfb2c0] Weighted P-Frames: Y:17.5% UV:10.0%
    [libx264 @ 0x6cfb2c0] ref P L0: 54.3% 15.8% 20.0%  8.8%  1.2%
    [libx264 @ 0x6cfb2c0] ref B L0: 87.9%  9.3%  2.8%
    [libx264 @ 0x6cfb2c0] ref B L1: 95.9%  4.1%
    [libx264 @ 0x6cfb2c0] kb/s:6263.27
    [AVIOContext @ 0x6cff580] Statistics: 1063666996 bytes read, 4 seeks
  • Why doesn't my lambda end with an expected request end event? [on hold]

    16 April 2019, by Nachum Freedman
    const now = (...a) =>
     console.log(...a, Math.floor(new Date().getTime() / 1000) % 3600);

    exports.handler = (event, context, callback) => {
     console.log("PROCESS START");

     const FROM_BUCKET = event.Records[0].s3.bucket.name;
     const Key = decodeURIComponent(
       event.Records[0].s3.object.key.replace(/\+/g, " ")
     );
     const uploadKey = Key.replace(/.webm/, ".mp4");

     console.log("FROM BUCKET", FROM_BUCKET);
     console.log("Recived key", Key);

     const slicedFilename = Key.slice(-4) !== "webm" ? Key.slice(-4) : ".webm";
     const extension =
       slicedFilename === ".mp4"
         ? ".mp4"
         : slicedFilename === ".pcm"
         ? ".pcm"
         : slicedFilename === ".webm"
         ? ".webm"
         : console.log("THE FILE NAME IS INCORRECT PLEASE CHECK @:", Key);

     console.log("Extension detected is", extension);

     const downloadKey =
       Key.slice(-4) !== "webm"
         ? Key.slice(0, -4) + extension
         : Key.slice(0, -5) + extension;

     const downloadParams = { Bucket: FROM_BUCKET, Key: downloadKey };

     console.log("Downloading file from S3", downloadParams);

     const mobileDownloadExtension =
       extension === ".mp4" ? ".pcm" : extension === ".pcm" ? ".mp4" : null;

     const mobileDownloadParams = {
       Bucket: FROM_BUCKET,
       Key: Key.slice(0, -4) + mobileDownloadExtension
     };

     console.log("Downloading file from S3", mobileDownloadParams);

     const tmpNamespace = Math.random();

     const isMobile = extension === ".mp4" || extension === ".pcm";
     let firstDigit = 0;
     let restOfTheOffset = 0;
     if (isMobile) {
       console.log(
         "If apply, second key to download is a file type",
         mobileDownloadExtension
       );
       const offsetTime = Key.slice(
         Key.lastIndexOf("_") + 1,
         Key.indexOf(".")
       ).replace("-", "");
       if (offsetTime > 999) {
         firstDigit = 1;
         restOfTheOffset = offsetTime.slice(1, offsetTime.length);
       } else {
         firstDigit = 0;
         restOfTheOffset = offsetTime;
       }
     }
     console.log("FIle recieved from Mobile?", isMobile);

     Promise.all([
       // download file from s3
       new Promise((resolve, reject) =>
         s3.getObject(downloadParams, (err, response) => {
           if (err) {
             console.error(
               "Error while downloading file from S3",
               downloadParams,
               err.code,
               "-",
               err.message
             );
             return reject(err);
           }
           console.log("Successfully downloaed file from S3", downloadParams);
           fs.writeFile(
             tmp + "/input" + tmpNamespace + extension,
             response.Body,
             err =>
               err
                 ? console.log(err.code, "-", err.message) || reject(err)
                 : console.log(tmp + "/input" + tmpNamespace + extension) ||
                   resolve()
           );
         })
       ),
       new Promise((resolve, reject) => {
         extension !== ".mp4" && extension !== ".pcm"
           ? resolve()
           : s3.getObject(mobileDownloadParams, (err, response) => {
               if (err) {
                 console.error(
                   "Error while downloading file from S3",
                   mobileDownloadParams,
                   err.code,
                   "-",
                   err.message
                 );
                 return reject(err);
               }
               console.log(
                 "Successfully downloaed file from S3",
                 mobileDownloadParams
               );
               fs.writeFile(
                 tmp + "/input" + tmpNamespace + mobileDownloadExtension,
                 response.Body,
                 err =>
                   err
                     ? console.log(err.code, "-", err.message) || reject(err)
                     : console.log(
                         tmp + "/input" + tmpNamespace + mobileDownloadExtension
                       ) || resolve()
               );
             });
       })
     ])
       .then(() =>
         Promise.all([
           // call the answerVideoReady -> PROCESSING mobileDownloadExtnesion is actually the second file we seek (if mp4 then pcm)
           // ,

           isMobile
             ? Promise.resolve()
                 .then(() => {
                   new Promise((resolve, reject) => {
                     console.log("CALLING VIDEO PROCESSING", Key);
                     videoProcessing(Key);
                     resolve();
                   });
                 })
                 .then(
                   () =>
                     // run ffmpeg
                     // ffmpeg -i tmp/input.mp4 -movflags faststart -acodec copy -vcodec copy output.mp4
                     now("FFMPEG CONVERT PCM TO WAV START") ||
                     new Promise((resolve, reject) =>
                       spawn("./ffmpeg/ffmpeg", [
                         "-f",
                         "s16le",
                         "-ar",
                         "16000",
                         "-ac",
                         "1",
                         "-i",
                         tmp + "/input" + tmpNamespace + ".pcm",
                         "-ar",
                         "44100",
                         "-ac",
                         "2",
                         tmp + "/input" + tmpNamespace + ".wav"
                       ]).on("close", code =>
                         console.log("FFMPEG CONVERT PCM TO WAV SUCCESS", code) ||
                         !code
                           ? resolve()
                           : reject()
                       )
                     )
                 )
                 .then(
                   () =>
                     // run ffmpeg
                     // ffmpeg -i tmp/input.mp4 -movflags faststart -acodec copy -vcodec copy output.mp4
    //more stuff
                     now("FFMPEG COMBINE MP4 AND WAV START") ||
                     new Promise(
                       (resolve, reject) =>
                         console.log(
                           [
                             "-i",
                             tmp + "/input" + tmpNamespace + ".mp4",
                             "-itsoffset",
                             "-" + firstDigit + "." + restOfTheOffset,
                             "-i",
                             tmp + "/input" + tmpNamespace + ".wav",
                             "-movflags",
                             "faststart",
                             "-filter_complex",
                             " [1:0] apad ",
                             "-shortest",
                             tmp + "/output" + tmpNamespace + ".mp4"
                           ].join(" ")
                         ) ||
                         spawn("./ffmpeg/ffmpeg", [
                           "-i",
                           tmp + "/input" + tmpNamespace + ".mp4",
                           "-itsoffset",
                           "-" + firstDigit + "." + restOfTheOffset,
                           "-i",
                           tmp + "/input" + tmpNamespace + ".wav",
                           "-movflags",
                           "faststart",
                           "-filter_complex",
                           " [1:0] apad ",
                           "-shortest",
                           tmp + "/output" + tmpNamespace + ".mp4"
                         ]).on("close", code =>
                           console.log(
                             "FFMPEG COMBINE MP4 AND WAV SUCCESS",
                             code
                           ) || !code
                             ? resolve()
                             : reject()
                         )
                     )
                 )
             : Promise.resolve()
                 .then(() => {
                   new Promise((resolve, reject) => {
                     console.log("CALLING VIDEO PROCESSING", Key);
                     videoProcessing(Key);
                     resolve();
                   });
                 })
                 .then(
                   () =>
                     now("FFMPEG SEND WEBM START") ||
                     new Promise((resolve, reject) => {
                       exec(
                         "./sed -i '1,4d;$d' " +
                           tmp +
                           "/input" +
                           tmpNamespace +
                           ".webm"
                       ).on(
                         "close",
                         code =>
                           console.log("FFMPEG SEND WEBM SUCCESS", code) ||
                           (!code ? resolve() : reject())
                       );
                     })
                 )
                 .then(
                   () =>
                     // run ffmpeg
                     // ffmpeg -i tmp/input.mp4 -movflags faststart -acodec copy -vcodec copy output.mp4
                     now("FFMPEG WEBM WEB READY START") ||
                     new Promise((resolve, reject) =>
                       (a => {
                         a.stdout.on("data", data => {
                           //console.log(`child stdout:\n${data}`);
                         });
                         a.stderr.on("data", data => {
                           //console.log(`child stdout:\n${data}`);
                         });
                         return a;
                       })(
                         spawn("./ffmpeg/ffmpeg", [
                           "-i",
                           tmp + "/input" + tmpNamespace + extension,
                           "-movflags",
                           "faststart",
                           "-acodec",
                           "aac",
                           "-vcodec",
                           "h264",
                           "-preset",
                           "slow",
                           "-crf",
                           "26",
                           "-r",
                           "25",
                           tmp + "/output" + tmpNamespace + ".mp4"
                         ])
                       ).on("close", code =>
                         console.log(
                           "FFMPEG WEBM WEB READY FINISHED WITH:",
                           code
                         ) || !code
                           ? resolve()
                           : reject()
                       )
                     )
                 )
         ])
       )
       .then(
         () =>
           new Promise((resolve, reject) =>
             // upload the output.mp4 to s3
             fs.readFile(
               tmp + "/output" + tmpNamespace + ".mp4",
               (err, filedata) => {
                 if (err) {
                   console.log("ERROR WHILE TRYING TO READ FILE", err);
                   throw err;
                 }
                 console.log("KEEEEYYY", uploadKey),
                   s3.putObject(
                     {
                       Bucket: TO_BUCKET,
                       Key: uploadKey,
                       Body: filedata
                     },
                     (err, response) => {
                       console.log(response);
                       if (err) {
                         console.log(
                           "ERROR WHILE UPLOADING FILE TO S3",
                           err,
                           response
                         );
                         return reject(err);
                       }
    //uploading file
                       console.log(
                         "Successfully uploaded file to " + TO_BUCKET,
                         Key
                       );
                       resolve();
                     }
                   );
               }
             )
           )

         // call the answerVideoReady -> COMPLETED, context.success  or ERROR, context.fail or error on set status to ERROR -> fail
       )
       .then(p =>
         videoCompleted(Key)
           .then(c => context.succeed())
           .catch(es => context.fail(es))
       )
       .catch(
         e =>
           console.log("catch for upload error with:", e) ||
           videoError(Key)
             .then(p => context.fail(e))
             .catch(ee => context.fail(ee))
       );
    };
  • ffmpeg record video plays too fast

    29 April 2023, by Kris Xia

    I'm a college student and I am studying FFmpeg now.

    I have written a program that records the desktop and audio ('virtual-audio-capturer') with FFmpeg, and I am now working on audio and video synchronization. I ran into a problem: the recorded video plays back too fast.

    When I looked for help with audio and video synchronization on the Internet, I found a formula for calculating PTS:

    pts = n * ((1 / timebase) / fps)
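
    To make sure I am reading it correctly, here is how I apply it, with numbers that are only an illustration: assuming a time base of 1/90000 and fps = 30, pts = n * ((1 / (1/90000)) / 30) = n * (90000 / 30) = n * 3000, so frame 60 gets pts 180000, i.e. 2 seconds in that time base. As far as I understand it, if the fps plugged into this formula does not match the rate at which frames are really produced, the playback speed ends up wrong by exactly that ratio.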

    When I use this formula, I notice the following:

    1. The higher the frame rate, the faster the video plays back.

    2. The slower the frame rate, the faster the video playback.

    I also find that when the frame rate is 10, the video plays back at the right speed.

    Why does this happen?

    I have been thinking about this for three days. I really hope someone can help me solve this problem.

    I really appreciate the help.

    #include "stdafx.h"

#ifdef  __cplusplus
extern "C"
{
#endif
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavdevice/avdevice.h"
#include "libavutil/audio_fifo.h"

#include "libavfilter/buffersink.h"
#include "libavfilter/buffersrc.h"
#include "libavutil/imgutils.h"
#include "libavutil/mathematics.h"
#include "libavutil/samplefmt.h"
#include "libavutil/time.h"
#include "libavutil/opt.h"
#include "libavutil/pixdesc.h"
#include "libavutil/file.h"
#include "libavutil/mem.h"
#include "libavutil/frame.h"
#include "libavfilter/avfilter.h"
#include "libswresample/swresample.h"

#pragma comment(lib, "avcodec.lib")
#pragma comment(lib, "avformat.lib")
#pragma comment(lib, "avutil.lib")
#pragma comment(lib, "avdevice.lib")
#pragma comment(lib, "avfilter.lib")

#pragma comment(lib, "avfilter.lib")
#pragma comment(lib, "postproc.lib")
#pragma comment(lib, "swresample.lib")
#pragma comment(lib, "swscale.lib")
#ifdef __cplusplus
};
#endif

AVFormatContext *pFormatCtx_Video = NULL, *pFormatCtx_Audio = NULL, *pFormatCtx_Out = NULL;

AVCodecContext *outVideoCodecCtx = NULL;
AVCodecContext *outAudioCodecCtx = NULL;

AVStream *pVideoStream = NULL, *pAudioStream = NULL;

AVCodec *outAVCodec;
AVCodec *outAudioCodec;

AVCodecContext  *pCodecCtx_Video;
AVCodec         *pCodec_Video;
AVFifoBuffer    *fifo_video = NULL;
AVAudioFifo     *fifo_audio = NULL;
int VideoIndex, AudioIndex;
int codec_id;

CRITICAL_SECTION AudioSection, VideoSection;



SwsContext *img_convert_ctx;
int frame_size = 0;

uint8_t *picture_buf = NULL, *frame_buf = NULL;

bool bCap = true;

DWORD WINAPI ScreenCapThreadProc( LPVOID lpParam );
DWORD WINAPI AudioCapThreadProc( LPVOID lpParam );

int OpenVideoCapture()
{
    AVInputFormat *ifmt=av_find_input_format("gdigrab");
    AVDictionary *options = NULL;
    av_dict_set(&options, "framerate", "60", NULL);
    if(avformat_open_input(&pFormatCtx_Video, "desktop", ifmt, &options)!=0)
    {
        printf("Couldn't open video input stream.\n");
        return -1;
    }
    if(avformat_find_stream_info(pFormatCtx_Video,NULL)<0)
    {
        printf("Couldn't find stream information.\n");
        return -1;
    }
    if (pFormatCtx_Video->streams[0]->codec->codec_type != AVMEDIA_TYPE_VIDEO)
    {
        printf("Couldn't find video stream information.\n");
        return -1;
    }
    pCodecCtx_Video = pFormatCtx_Video->streams[0]->codec;
    pCodec_Video = avcodec_find_decoder(pCodecCtx_Video->codec_id);
    if(pCodec_Video == NULL)
    {
        printf("Codec not found.\n");
        return -1;
    }
    if(avcodec_open2(pCodecCtx_Video, pCodec_Video, NULL) < 0)
    {
        printf("Could not open codec.\n");
        return -1;
    }

    av_dump_format(pFormatCtx_Video, 0, NULL, 0);

    img_convert_ctx = sws_getContext(pCodecCtx_Video->width, pCodecCtx_Video->height, pCodecCtx_Video->pix_fmt, 
        pCodecCtx_Video->width, pCodecCtx_Video->height, PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL); 

    frame_size = avpicture_get_size(pCodecCtx_Video->pix_fmt, pCodecCtx_Video->width, pCodecCtx_Video->height);
    fifo_video = av_fifo_alloc(30 * avpicture_get_size(AV_PIX_FMT_YUV420P, pCodecCtx_Video->width, pCodecCtx_Video->height));

    return 0;
}

static char *dup_wchar_to_utf8(wchar_t *w)
{
    char *s = NULL;
    int l = WideCharToMultiByte(CP_UTF8, 0, w, -1, 0, 0, 0, 0);
    s = (char *) av_malloc(l);
    if (s)
        WideCharToMultiByte(CP_UTF8, 0, w, -1, s, l, 0, 0);
    return s;
}

int OpenAudioCapture()
{
    AVInputFormat *pAudioInputFmt = av_find_input_format("dshow");
    char * psDevName = dup_wchar_to_utf8(L"audio=virtual-audio-capturer");

    if (avformat_open_input(&pFormatCtx_Audio, psDevName, pAudioInputFmt,NULL) < 0)
    {
        printf("Couldn't open audio input stream.\n");
        return -1;
    }

    if(avformat_find_stream_info(pFormatCtx_Audio,NULL)<0)  
        return -1; 

    if(pFormatCtx_Audio->streams[0]->codec->codec_type != AVMEDIA_TYPE_AUDIO)
    {
        printf("Couldn't find audio stream information.\n");
        return -1;
    }

    AVCodec *tmpCodec = avcodec_find_decoder(pFormatCtx_Audio->streams[0]->codec->codec_id);
    if(0 > avcodec_open2(pFormatCtx_Audio->streams[0]->codec, tmpCodec, NULL))
    {
        printf("can not find or open audio decoder!\n");
    }

    av_dump_format(pFormatCtx_Audio, 0, NULL, 0);

    return 0;
}

int OpenOutPut()
{
    AVStream *pVideoStream = NULL, *pAudioStream = NULL;
    const char *outFileName = "test.mp4";
    avformat_alloc_output_context2(&pFormatCtx_Out, NULL, NULL, outFileName);

    if (pFormatCtx_Video->streams[0]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
    {
        VideoIndex = 0;
        pVideoStream = avformat_new_stream(pFormatCtx_Out, NULL);
        if (!pVideoStream)
        {
            printf("can not new stream for output!\n");
            return -1;
        }

        outVideoCodecCtx = avcodec_alloc_context3(outAVCodec);
        if ( !outVideoCodecCtx )
        {
            printf("Error : avcodec_alloc_context3()\n");
            return -1;
        }

        //set codec context param
        outVideoCodecCtx = pVideoStream->codec;
        outVideoCodecCtx->codec_id = AV_CODEC_ID_MPEG4;
        outVideoCodecCtx->width = pFormatCtx_Video->streams[0]->codec->width;
        outVideoCodecCtx->height = pFormatCtx_Video->streams[0]->codec->height;
        outVideoCodecCtx->time_base = pFormatCtx_Video->streams[0]->codec->time_base;
        outVideoCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
        outVideoCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;

        if (codec_id == AV_CODEC_ID_H264)
        {
            av_opt_set(outVideoCodecCtx->priv_data, "preset", "slow", 0);
        }

        outAVCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
        if( !outAVCodec )
        {
            printf("\n\nError : avcodec_find_encoder()");
            return -1;
        }
        if (pFormatCtx_Out->oformat->flags & AVFMT_GLOBALHEADER)
            outVideoCodecCtx->flags |=CODEC_FLAG_GLOBAL_HEADER;

        if ((avcodec_open2(outVideoCodecCtx,outAVCodec, NULL)) < 0)
        {
            printf("can not open the encoder\n");
            return -1;
        }
    }

    if(pFormatCtx_Audio->streams[0]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
    {
        AVCodecContext *pOutputCodecCtx;
        AudioIndex = 1;
        pAudioStream = avformat_new_stream(pFormatCtx_Out, NULL);

        pAudioStream->codec->codec = avcodec_find_encoder(pFormatCtx_Out->oformat->audio_codec);

        pOutputCodecCtx = pAudioStream->codec;

        pOutputCodecCtx->sample_rate = pFormatCtx_Audio->streams[0]->codec->sample_rate;
        pOutputCodecCtx->channel_layout = pFormatCtx_Out->streams[0]->codec->channel_layout;
        pOutputCodecCtx->channels = av_get_channel_layout_nb_channels(pAudioStream->codec->channel_layout);
        if(pOutputCodecCtx->channel_layout == 0)
        {
            pOutputCodecCtx->channel_layout = AV_CH_LAYOUT_STEREO;
            pOutputCodecCtx->channels = av_get_channel_layout_nb_channels(pOutputCodecCtx->channel_layout);

        }
        pOutputCodecCtx->sample_fmt = pAudioStream->codec->codec->sample_fmts[0];
        AVRational time_base={1, pAudioStream->codec->sample_rate};
        pAudioStream->time_base = time_base;
        //audioCodecCtx->time_base = time_base;

        pOutputCodecCtx->codec_tag = 0;  
        if (pFormatCtx_Out->oformat->flags & AVFMT_GLOBALHEADER)  
            pOutputCodecCtx->flags |= CODEC_FLAG_GLOBAL_HEADER;

        if (avcodec_open2(pOutputCodecCtx, pOutputCodecCtx->codec, 0) < 0)
        {
            printf("Failed to open the audio encoder, exiting\n");
            return -1;
        }
    }

    if (!(pFormatCtx_Out->oformat->flags & AVFMT_NOFILE))
    {
        if(avio_open(&pFormatCtx_Out->pb, outFileName, AVIO_FLAG_WRITE) < 0)
        {
            printf("can not open output file handle!\n");
            return -1;
        }
    }

    if(avformat_write_header(pFormatCtx_Out, NULL) < 0)
    {
        printf("can not write the header of the output file!\n");
        return -1;
    }

    return 0;
}

int _tmain(int argc, _TCHAR* argv[])
{
    av_register_all();
    avdevice_register_all();
    if (OpenVideoCapture() < 0)
    {
        return -1;
    }
    if (OpenAudioCapture() < 0)
    {
        return -1;
    }
    if (OpenOutPut() < 0)
    {
        return -1;
    }
//  int fps;
    /*printf("Enter the frame rate: ");
    scanf_s("%d",&fps);
    if ( NULL == fps)
    {
        fps = 10;
    }*/

    InitializeCriticalSection(&VideoSection);
    InitializeCriticalSection(&AudioSection);

    AVFrame *picture = av_frame_alloc();
    int size = avpicture_get_size(pFormatCtx_Out->streams[VideoIndex]->codec->pix_fmt, 
        pFormatCtx_Out->streams[VideoIndex]->codec->width, pFormatCtx_Out->streams[VideoIndex]->codec->height);
    picture_buf = new uint8_t[size];

    avpicture_fill((AVPicture *)picture, picture_buf, 
        pFormatCtx_Out->streams[VideoIndex]->codec->pix_fmt, 
        pFormatCtx_Out->streams[VideoIndex]->codec->width, 
        pFormatCtx_Out->streams[VideoIndex]->codec->height);



    //start the screen capture thread
    CreateThread( NULL, 0, ScreenCapThreadProc, 0, 0, NULL);
    //start the audio capture thread
    CreateThread( NULL, 0, AudioCapThreadProc, 0, 0, NULL);
    int64_t cur_pts_v=0,cur_pts_a=0;
    int VideoFrameIndex = 0, AudioFrameIndex = 0;

    while(1)
    {
        if (_kbhit() != 0 && bCap)
        {
            bCap = false;
            Sleep(2000);
        }
        if (fifo_audio && fifo_video)
        {
            int sizeAudio = av_audio_fifo_size(fifo_audio);
            int sizeVideo = av_fifo_size(fifo_video);
            // once the buffered data has been written out, leave the loop
            if (av_audio_fifo_size(fifo_audio) <= pFormatCtx_Out->streams[AudioIndex]->codec->frame_size && 
                av_fifo_size(fifo_video) <= frame_size && !bCap)
            {
                break;
            }
        }

        if(av_compare_ts(cur_pts_v, pFormatCtx_Out->streams[VideoIndex]->time_base, 
                         cur_pts_a,pFormatCtx_Out->streams[AudioIndex]->time_base) <= 0)
        {
            if (av_fifo_size(fifo_video) < frame_size && !bCap)
            {
                cur_pts_v = 0x7fffffffffffffff;
            }
            if(av_fifo_size(fifo_video) >= size)
            {
                EnterCriticalSection(&VideoSection);
                av_fifo_generic_read(fifo_video, picture_buf, size, NULL); // read buffered frame data out of the AVFifoBuffer
                LeaveCriticalSection(&VideoSection);

                avpicture_fill((AVPicture *)picture, picture_buf,
                    pFormatCtx_Out->streams[VideoIndex]->codec->pix_fmt,
                    pFormatCtx_Out->streams[VideoIndex]->codec->width,
                    pFormatCtx_Out->streams[VideoIndex]->codec->height); // set up the picture fields from the given image parameters and data buffer

                //pts = n * ((1 / timbase)/ fps);
                //picture->pts = VideoFrameIndex * ((pFormatCtx_Video->streams[0]->time_base.den / pFormatCtx_Video->streams[0]->time_base.num) / 24);
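                // NOTE: this applies the PTS formula above with hard-coded constants
                // (100000 and 180) on top of the encoder time base.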
                picture->pts = VideoFrameIndex * ((outVideoCodecCtx->time_base.den * 100000 / outVideoCodecCtx->time_base.num) / 180);

                int got_picture = 0;
                AVPacket pkt;
                av_init_packet(&pkt);

                pkt.data = NULL;
                pkt.size = 0;
                // get the raw input video data from the frame
                int ret = avcodec_encode_video2(pFormatCtx_Out->streams[VideoIndex]->codec, &pkt, picture, &got_picture);
                if(ret < 0)
                {
                    continue;
                }

                if (got_picture==1)
                {
                    pkt.stream_index = VideoIndex;
                    /*int count = 1;
                    pkt.pts = pkt.dts = count * ((pFormatCtx_Video->streams[0]->time_base.den / pFormatCtx_Video->streams[0]->time_base.num) / 15);
                    count++;*/

                    //x = pts * (timebase1.num / timebase1.den )* (timebase2.den / timebase2.num);

                    pkt.pts = av_rescale_q_rnd(pkt.pts, pFormatCtx_Video->streams[0]->time_base, 
                        pFormatCtx_Out->streams[VideoIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX));  
                    pkt.dts = av_rescale_q_rnd(pkt.dts,  pFormatCtx_Video->streams[0]->time_base, 
                        pFormatCtx_Out->streams[VideoIndex]->time_base, (AVRounding)(AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX)); 


                    pkt.duration = ((pFormatCtx_Out->streams[0]->time_base.den / pFormatCtx_Out->streams[0]->time_base.num) / 60);
                    //pkt.duration = 1000/60;
                    //pkt.pts = pkt.dts = Count * (ofmt_ctx->streams[stream_index]->time_base.den) /ofmt_ctx->streams[stream_index]->time_base.num / 10;

                    //Count++;


                    cur_pts_v = pkt.pts;

                    ret = av_interleaved_write_frame(pFormatCtx_Out, &pkt);
                    //delete[] pkt.data;
                    av_free_packet(&pkt);
                }
                VideoFrameIndex++;
            }
        }
        else
        {
            if (NULL == fifo_audio)
            {
                continue; // the audio FIFO has not been initialised yet
            }
            if (av_audio_fifo_size(fifo_audio) < pFormatCtx_Out->streams[AudioIndex]->codec->frame_size && !bCap)
            {
                cur_pts_a = 0x7fffffffffffffff;
            }
            if(av_audio_fifo_size(fifo_audio) >= 
                (pFormatCtx_Out->streams[AudioIndex]->codec->frame_size > 0 ? pFormatCtx_Out->streams[AudioIndex]->codec->frame_size : 1024))
            {
                AVFrame *frame;
                frame = av_frame_alloc();
                frame->nb_samples = pFormatCtx_Out->streams[AudioIndex]->codec->frame_size>0 ? pFormatCtx_Out->streams[AudioIndex]->codec->frame_size: 1024;
                frame->channel_layout = pFormatCtx_Out->streams[AudioIndex]->codec->channel_layout;
                frame->format = pFormatCtx_Out->streams[AudioIndex]->codec->sample_fmt;
                frame->sample_rate = pFormatCtx_Out->streams[AudioIndex]->codec->sample_rate;
                av_frame_get_buffer(frame, 0);

                EnterCriticalSection(&AudioSection);
                av_audio_fifo_read(fifo_audio, (void **)frame->data, 
                    (pFormatCtx_Out->streams[AudioIndex]->codec->frame_size > 0 ? pFormatCtx_Out->streams[AudioIndex]->codec->frame_size : 1024));
                LeaveCriticalSection(&AudioSection);

                AVPacket pkt_out;
                av_init_packet(&pkt_out);
                int got_picture = -1;
                pkt_out.data = NULL;
                pkt_out.size = 0;

                frame->pts = AudioFrameIndex * pFormatCtx_Out->streams[AudioIndex]->codec->frame_size;
                if (avcodec_encode_audio2(pFormatCtx_Out->streams[AudioIndex]->codec, &pkt_out, frame, &got_picture) < 0)
                {
                    printf("cannot encode an audio frame");
                }
                av_frame_free(&frame);
                if (got_picture) 
                {
                    pkt_out.stream_index = AudioIndex;
                    pkt_out.pts = AudioFrameIndex * pFormatCtx_Out->streams[AudioIndex]->codec->frame_size;
                    pkt_out.dts = AudioFrameIndex * pFormatCtx_Out->streams[AudioIndex]->codec->frame_size;
                    pkt_out.duration = pFormatCtx_Out->streams[AudioIndex]->codec->frame_size;

                    cur_pts_a = pkt_out.pts;

                    int ret = av_interleaved_write_frame(pFormatCtx_Out, &pkt_out);
                    av_free_packet(&pkt_out);
                }
                AudioFrameIndex++;
            }
        }
    }

    delete[] picture_buf;

    av_fifo_free(fifo_video);
    av_audio_fifo_free(fifo_audio);

    av_write_trailer(pFormatCtx_Out);

    avio_close(pFormatCtx_Out->pb);
    avformat_free_context(pFormatCtx_Out);

    if (pFormatCtx_Video != NULL)
    {
        avformat_close_input(&pFormatCtx_Video);
        pFormatCtx_Video = NULL;
    }
    if (pFormatCtx_Audio != NULL)
    {
        avformat_close_input(&pFormatCtx_Audio);
        pFormatCtx_Audio = NULL;
    }

    return 0;
}

DWORD WINAPI ScreenCapThreadProc( LPVOID lpParam )
{
    AVPacket packet;
    int got_picture;
    AVFrame *pFrame;
    pFrame=av_frame_alloc();

    AVFrame *picture = av_frame_alloc();
    int size = avpicture_get_size(pFormatCtx_Out->streams[VideoIndex]->codec->pix_fmt, 
        pFormatCtx_Out->streams[VideoIndex]->codec->width, 
        pFormatCtx_Out->streams[VideoIndex]->codec->height);

    avpicture_fill((AVPicture *)picture, picture_buf, 
        pFormatCtx_Out->streams[VideoIndex]->codec->pix_fmt, 
        pFormatCtx_Out->streams[VideoIndex]->codec->width, 
        pFormatCtx_Out->streams[VideoIndex]->codec->height);

    FILE *p = NULL;
    p = fopen("proc_test.yuv", "wb+");
    av_init_packet(&packet);
    int height = pFormatCtx_Out->streams[VideoIndex]->codec->height;
    int width = pFormatCtx_Out->streams[VideoIndex]->codec->width;
    int y_size=height*width;
    while(bCap)
    {
        packet.data = NULL;
        packet.size = 0;
        if (av_read_frame(pFormatCtx_Video, &packet) < 0)
        {
            continue;
        }
        if(packet.stream_index == 0)
        {
            if (avcodec_decode_video2(pCodecCtx_Video, pFrame, &got_picture, &packet) < 0)
            {
                printf("Decode Error.\n");
                continue;
            }
            if (got_picture)
            {
                sws_scale(img_convert_ctx, 
                    (const uint8_t* const*)pFrame->data,
                    pFrame->linesize, 
                    0, 
                    pFormatCtx_Out->streams[VideoIndex]->codec->height,
                    picture->data,
                    picture->linesize);

                if (av_fifo_space(fifo_video) >= size)
                {
                    EnterCriticalSection(&VideoSection);                    
                    av_fifo_generic_write(fifo_video, picture->data[0], y_size, NULL);
                    av_fifo_generic_write(fifo_video, picture->data[1], y_size/4, NULL);
                    av_fifo_generic_write(fifo_video, picture->data[2], y_size/4, NULL);
                    LeaveCriticalSection(&VideoSection);
                }
            }
        }
        av_free_packet(&packet);
    }
    av_frame_free(&pFrame);
    av_frame_free(&picture);
    return 0;
}

DWORD WINAPI AudioCapThreadProc( LPVOID lpParam )
{
    AVPacket pkt;
    AVFrame *frame;
    frame = av_frame_alloc();
    int gotframe;
    while(bCap)
    {
        pkt.data = NULL;
        pkt.size = 0;
        if(av_read_frame(pFormatCtx_Audio,&pkt) < 0)
        {
            continue;
        }

        if (avcodec_decode_audio4(pFormatCtx_Audio->streams[0]->codec, frame, &gotframe, &pkt) < 0)
        {
            av_frame_free(&frame);
            printf("cannot decode an audio frame");
            break;
        }
        av_free_packet(&pkt);

        if (!gotframe)
        {
            printf("no data obtained, continuing with the next read");
            continue;
        }

        if (NULL == fifo_audio)
        {
            fifo_audio = av_audio_fifo_alloc(pFormatCtx_Audio->streams[0]->codec->sample_fmt, 
                pFormatCtx_Audio->streams[0]->codec->channels, 30 * frame->nb_samples);
        }

        int buf_space = av_audio_fifo_space(fifo_audio);
        if (av_audio_fifo_space(fifo_audio) >= frame->nb_samples)
        {
            EnterCriticalSection(&AudioSection);
            av_audio_fifo_write(fifo_audio, (void **)frame->data, frame->nb_samples);
            LeaveCriticalSection(&AudioSection);
        }
    }
    av_frame_free(&frame);
    return 0;
}


    Maybe there is another way to calculate PTS and DTS

    I hope that whatever the frame rate is, the video playback speed will be right, not too fast or too slow.
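
    For reference, this is roughly the kind of alternative I imagine (only a sketch based on my code above; it assumes a variable fps holding the real capture frame rate, and I have not tried it yet): derive the timestamps from the frame index with av_rescale_q instead of computing them with hand-written constants, and let av_packet_rescale_ts move pts, dts and duration into the muxer time base after encoding.

    /* Sketch: express the frame index in a 1/fps time base and rescale it. */
    AVRational input_rate = { 1, fps };   /* one tick per captured frame */

    picture->pts = av_rescale_q(VideoFrameIndex, input_rate, outVideoCodecCtx->time_base);

    /* after encoding, rescale the packet timestamps into the muxer's time base */
    av_packet_rescale_ts(&pkt, outVideoCodecCtx->time_base,
                         pFormatCtx_Out->streams[VideoIndex]->time_base);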