
Other articles (106)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to extract the data needed by search engines, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
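    The article is truncated here, but the conversion it describes is the kind of job ffmpeg handles from the command line. Purely as an illustration (file names and quality settings are hypothetical, and MediaSPIP's own presets are not reproduced here), the three web formats could be produced along these lines:

# Illustrative sketch only: convert one uploaded video into the HTML5/Flash
# formats mentioned above using ffmpeg. Settings and file names are hypothetical.
import subprocess

SRC = "upload.avi"  # hypothetical uploaded file

for args, out in [
    (["-c:v", "libtheora", "-q:v", "6", "-c:a", "libvorbis"], "upload.ogv"),   # Ogv (HTML5)
    (["-c:v", "libvpx", "-b:v", "1M", "-c:a", "libvorbis"], "upload.webm"),    # WebM (HTML5)
    (["-c:v", "libx264", "-crf", "23", "-c:a", "aac"], "upload.mp4"),          # MP4 (Flash)
]:
    subprocess.run(["ffmpeg", "-y", "-i", SRC, *args, out], check=True)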

  • Customizable form

    21 June 2013, by

    This page presents the fields available in the media publication form and lists the fields that can be added.
    Media creation form
    For a media-type document, the default fields are: Text, Enable/disable the forum (the comment prompt can be disabled for each article), Licence, Add/remove authors, Tags.
    This form can be modified under:
    Administration > Configuration des masques de formulaire. (...)

  • What is a form mask

    13 June 2013, by

    A form mask is a customization of the publication form for media, sections, news items, editorials and links to other sites.
    Each object's publication form can therefore be customized.
    To customize the form fields, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires".
    Then select the form to modify by clicking on its object type. (...)

On other sites (5972)

  • FFmpeg RTSP drop rate increases when frame rate is reduced

    13 April 2024, by Avishka Perera

    I need to read an RTSP stream, process the images individually in Python, and then write the images back to an RTSP stream. As the RTSP server, I am using Mediamtx [1]. For streaming, I am using FFmpeg [2].
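    For context, here is a minimal sketch of the read side (not part of the question's original code): decoded frames can be pulled from the RTSP source through an ffmpeg pipe and handled as numpy arrays before being passed to the writer process shown below. The stream URL and frame size are assumptions.

# Sketch of the "read" side: decode an RTSP stream with ffmpeg and receive raw
# RGB frames on stdout for processing in Python. URL and size are assumptions.
import subprocess
import numpy as np

width, height = 640, 480
src = "rtsp://localhost:8554/webcam"  # hypothetical source published to Mediamtx

read_cmd = [
    "ffmpeg",
    "-rtsp_transport", "tcp",   # TCP tends to be more robust than UDP here
    "-i", src,
    "-f", "rawvideo",           # raw frames on stdout
    "-pix_fmt", "rgb24",
    "-s", f"{width}x{height}",
    "-",
]
reader = subprocess.Popen(read_cmd, stdout=subprocess.PIPE)

frame_size = width * height * 3
while True:
    buf = reader.stdout.read(frame_size)
    if len(buf) < frame_size:   # stream ended or broke
        break
    frame = np.frombuffer(buf, np.uint8).reshape(height, width, 3)
    # ... process `frame`, then hand it to the writer process shown below ...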

    


    I have the following code that works perfectly fine. For simplification purposes, I am streaming three generated images.

    


    import time
import numpy as np
import subprocess

width, height = 640, 480
fps = 25
rtsp_server_address = f"rtsp://localhost:8554/mystream"

ffmpeg_cmd = [
    "ffmpeg",
    "-re",
    "-f",
    "rawvideo",
    "-pix_fmt",
    "rgb24",
    "-s",
    f"{width}x{height}",
    "-i",
    "-",
    "-r",
    str(fps),
    "-avoid_negative_ts",
    "make_zero",
    "-vcodec",
    "libx264",
    "-threads",
    "4",
    "-f",
    "rtsp",
    rtsp_server_address,
]
colors = np.array(
    [
        [255, 0, 0],
        [0, 255, 0],
        [0, 0, 255],
    ]
).reshape(3, 1, 1, 3)
# each raw frame must be height rows of width RGB pixels to match "-s {width}x{height}"
images = (np.ones((3, height, width, 3)) * colors).astype(np.uint8)

if __name__ == "__main__":

    process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)
    start = time.time()
    exported = 0
    # pace writes so frames are delivered to ffmpeg at the target fps
    while True:
        exported += 1
        next_time = start + exported / fps
        now = time.time()
        if next_time > now:
            sleep_dur = next_time - now
            time.sleep(sleep_dur)

        image = images[exported % 3]
        image_bytes = image.tobytes()

        process.stdin.write(image_bytes)
        process.stdin.flush()

    process.stdin.close()
    process.wait()


    


    The issue is that I need to run this at 10 fps, because the processing step is heavy and can only keep up with 10 fps. But as I reduce the frame rate from 25 to 10, the drop rate increases from 0% to 100%, and after a few iterations I get a BrokenPipeError: [Errno 32] Broken pipe. Refer to the appendix for the complete log.
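    One detail worth checking, offered only as an assumption rather than a confirmed fix: ffmpeg's rawvideo demuxer assumes 25 fps unless the input rate is declared, so feeding frames at 10 fps while requesting -r 10 on the output leaves the declared input rate at 25. Declaring the real rate before "-i -" (reusing width, height and rtsp_server_address from the script above) would look like this:

# Hedged sketch: declare the actual input frame rate for the rawvideo demuxer
# so the declared input rate and the output "-r" agree.
fps = 10

ffmpeg_cmd = [
    "ffmpeg",
    "-re",
    "-f", "rawvideo",
    "-pix_fmt", "rgb24",
    "-s", f"{width}x{height}",
    "-framerate", str(fps),   # input option: rate at which frames are actually fed
    "-i", "-",
    "-r", str(fps),           # output rate, now matching the input
    "-avoid_negative_ts", "make_zero",
    "-vcodec", "libx264",
    "-threads", "4",
    "-f", "rtsp",
    rtsp_server_address,
]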

    


    As an alternative, I can use OpenCV compiled from source with GStreamer [3], but I would prefer to stick with FFmpeg to keep deployment simple, since compiling OpenCV from source can be tedious and system-dependent.
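    For reference, a sketch of that GStreamer alternative, assuming OpenCV was built with GStreamer support; the pipeline follows the pattern documented for Mediamtx [3], and the element options shown here are assumptions that may need tuning:

# Sketch: publish processed frames to the RTSP server via an OpenCV VideoWriter
# backed by a GStreamer pipeline (requires OpenCV built with GStreamer).
import cv2
import numpy as np

width, height, fps = 640, 480, 10
pipeline = (
    "appsrc ! videoconvert ! "
    "x264enc speed-preset=ultrafast tune=zerolatency ! "
    "rtspclientsink location=rtsp://localhost:8554/mystream"
)
writer = cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0, fps, (width, height), True)

frame = np.zeros((height, width, 3), np.uint8)  # placeholder for a processed frame
writer.write(frame)  # called once per processed frame in a real loop
writer.release()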

    


    References

    


    [1] Mediamtx (formerly rtsp-simple-server): https://github.com/bluenviron/mediamtx

    


    [2] FFmpeg: https://github.com/FFmpeg/FFmpeg

    


    [3] Compile OpenCV with GStreamer: https://github.com/bluenviron/mediamtx?tab=readme-ov-file#opencv

    


    Appendix

    


    Creating the source stream

    


    To instantiate the unprocessed stream, I use the following command. This streams the content of my webcam as an RTSP stream.

    


    ffmpeg -video_size 1280x720 -i /dev/video0  -avoid_negative_ts make_zero -vcodec libx264 -r 10 -f rtsp rtsp://localhost:8554/webcam


    


    Error log

    


ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12.3.0 (conda-forge gcc 12.3.0-5)
  configuration: --prefix=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-cc --cxx=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-c++ --nm=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-nm --ar=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-ar --disable-doc --disable-openssl --enable-demuxer=dash --enable-hardcoded-tables --enable-libfreetype --enable-libharfbuzz --enable-libfontconfig --enable-libopenh264 --enable-libdav1d --enable-gnutls --enable-libmp3lame --enable-libvpx --enable-libass --enable-pthreads --enable-vaapi --enable-libopenvino --enable-gpl --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-pic --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libopus --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/pkg-config
  libavutil      58. 29.100 / 58. 29.100
  libavcodec     60. 31.102 / 60. 31.102
  libavformat    60. 16.100 / 60. 16.100
  libavdevice    60.  3.100 / 60.  3.100
  libavfilter     9. 12.100 /  9. 12.100
  libswscale      7.  5.100 /  7.  5.100
  libswresample   4. 12.100 /  4. 12.100
  libpostproc    57.  3.100 / 57.  3.100
Input #0, rawvideo, from 'fd:':
  Duration: N/A, start: 0.000000, bitrate: 184320 kb/s
  Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 640x480, 184320 kb/s, 25 tbr, 25 tbn
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
[libx264 @ 0x5e2ef8b01340] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x5e2ef8b01340] profile High 4:4:4 Predictive, level 2.2, 4:4:4, 8-bit
[libx264 @ 0x5e2ef8b01340] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=10 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, rtsp, to 'rtsp://localhost:8554/mystream':
  Metadata:
    encoder         : Lavf60.16.100
  Stream #0:0: Video: h264, yuv444p(tv, progressive), 640x480, q=2-31, 10 fps, 90k tbn
    Metadata:
      encoder         : Lavc60.31.102 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[vost#0:0/libx264 @ 0x5e2ef8b01080] Error submitting a packet to the muxer: Broken pipe
[out#0/rtsp @ 0x5e2ef8afd780] Error muxing a packet
[out#0/rtsp @ 0x5e2ef8afd780] video:1kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
frame=    1 fps=0.1 q=-1.0 Lsize=N/A time=00:00:04.70 bitrate=N/A dup=0 drop=70 speed=0.389x
[libx264 @ 0x5e2ef8b01340] frame I:16    Avg QP: 6.00  size:   147
[libx264 @ 0x5e2ef8b01340] frame P:17    Avg QP: 9.94  size:   101
[libx264 @ 0x5e2ef8b01340] frame B:17    Avg QP: 9.94  size:    64
[libx264 @ 0x5e2ef8b01340] consecutive B-frames: 50.0%  0.0% 42.0%  8.0%
[libx264 @ 0x5e2ef8b01340] mb I  I16..4: 81.3% 18.7%  0.0%
[libx264 @ 0x5e2ef8b01340] mb P  I16..4: 52.9%  0.0%  0.0%  P16..4:  0.0%  0.0%  0.0%  0.0%  0.0%    skip:47.1%
[libx264 @ 0x5e2ef8b01340] mb B  I16..4:  0.0%  5.9%  0.0%  B16..8:  0.1%  0.0%  0.0%  direct: 0.0%  skip:94.0%  L0:56.2% L1:43.8% BI: 0.0%
[libx264 @ 0x5e2ef8b01340] 8x8 transform intra:15.4% inter:100.0%
[libx264 @ 0x5e2ef8b01340] coded y,u,v intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
[libx264 @ 0x5e2ef8b01340] i16 v,h,dc,p: 97%  0%  3%  0%
[libx264 @ 0x5e2ef8b01340] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  0%  0% 100%  0%  0%  0%  0%  0%  0%
[libx264 @ 0x5e2ef8b01340] Weighted P-Frames: Y:52.9% UV:52.9%
[libx264 @ 0x5e2ef8b01340] ref P L0: 88.9%  0.0%  0.0% 11.1%
[libx264 @ 0x5e2ef8b01340] kb/s:8.27
Conversion failed!
Traceback (most recent call last):
  File "/home/avishka/projects/read-process-stream/minimal-ffmpeg-error.py", line 58, in <module>
    process.stdin.write(image_bytes)
BrokenPipeError: [Errno 32] Broken pipe


  • FFMPEG. Read frame, process it, put it to output video. Copy sound stream unchanged

    9 December 2016, by Andrey Smorodov

    I want to apply processing to a video clip with a sound track: extract and process it frame by frame, and write the result to an output file. The number of frames, the frame size and the speed remain unchanged in the output clip. I also want to keep the same audio track as in the source.

    I can read the clip, decode frames and process them using OpenCV. Audio packets also write fine. I am stuck on forming the output video stream.

    The minimal runnable code I have for now (sorry it's not so short, but I can't make it shorter):

    extern "C" {
    #include <libavutil></libavutil>timestamp.h>
    #include <libavformat></libavformat>avformat.h>
    #include "libavcodec/avcodec.h"
    #include <libavutil></libavutil>opt.h>
    #include <libavdevice></libavdevice>avdevice.h>
    #include <libswscale></libswscale>swscale.h>
    }
    #include "opencv2/opencv.hpp"

    #if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55,28,1)
    #define av_frame_alloc  avcodec_alloc_frame
    #endif

    using namespace std;
    using namespace cv;

    static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag)
    {
       AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;

       // format raw timestamps and time-base-scaled timestamps into separate buffers
       char buf1[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_string(buf1, pkt->pts);
       char buf2[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_string(buf2, pkt->dts);
       char buf3[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_string(buf3, pkt->duration);

       char buf4[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_time_string(buf4, pkt->pts, time_base);
       char buf5[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_time_string(buf5, pkt->dts, time_base);
       char buf6[AV_TS_MAX_STRING_SIZE] = { 0 };
       av_ts_make_time_string(buf6, pkt->duration, time_base);

       printf("pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
           buf1, buf4,
           buf2, buf5,
           buf3, buf6,
           pkt->stream_index);

    }


    int main(int argc, char **argv)
    {
       AVOutputFormat *ofmt = NULL;
       AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
       AVPacket pkt;
       AVFrame *pFrame = NULL;
       AVFrame *pFrameRGB = NULL;
       int frameFinished = 0;
       pFrame = av_frame_alloc();
       pFrameRGB = av_frame_alloc();

       const char *in_filename, *out_filename;
       int ret, i;
       in_filename = "../../TestClips/Audio Video Sync Test.mp4";
       out_filename = "out.mp4";

       // Initialize FFMPEG
       av_register_all();
       // Get input file format context
       if ((ret = avformat_open_input(&ifmt_ctx, in_filename, 0, 0)) < 0)
       {
           fprintf(stderr, "Could not open input file '%s'", in_filename);
           goto end;
       }
       // Extract streams description
       if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0)
       {
           fprintf(stderr, "Failed to retrieve input stream information");
           goto end;
       }
       // Print detailed information about the input or output format,
       // such as duration, bitrate, streams, container, programs, metadata, side data, codec and time base.
       av_dump_format(ifmt_ctx, 0, in_filename, 0);

       // Allocate an AVFormatContext for an output format.
       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
       if (!ofmt_ctx)
       {
           fprintf(stderr, "Could not create output context\n");
           ret = AVERROR_UNKNOWN;
           goto end;
       }

       // The output container format.
       ofmt = ofmt_ctx->oformat;

       // Allocating output streams
       for (i = 0; i < ifmt_ctx->nb_streams; i++)
       {
           AVStream *in_stream = ifmt_ctx->streams[i];
           AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
           if (!out_stream)
           {
               fprintf(stderr, "Failed allocating output stream\n");
               ret = AVERROR_UNKNOWN;
               goto end;
           }
           ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
           if (ret < 0)
           {
               fprintf(stderr, "Failed to copy context from input to output stream codec context\n");
               goto end;
           }
           out_stream->codec->codec_tag = 0;
           if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
           {
               out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
           }
       }

       // Show output format info
       av_dump_format(ofmt_ctx, 0, out_filename, 1);

       // Open output file
       if (!(ofmt->flags & AVFMT_NOFILE))
       {
           ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
           if (ret < 0)
           {
               fprintf(stderr, "Could not open output file '%s'", out_filename);
               goto end;
           }
       }
       // Write output file header
       ret = avformat_write_header(ofmt_ctx, NULL);
       if (ret < 0)
       {
           fprintf(stderr, "Error occurred when opening output file\n");
           goto end;
       }

       // Search for input video codec info
       AVCodec *in_codec = nullptr;
       AVCodecContext* avctx = nullptr;

       int video_stream_index = -1;
       for (int i = 0; i < ifmt_ctx->nb_streams; i++)
       {
           if (ifmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
           {
               video_stream_index = i;
               avctx = ifmt_ctx->streams[i]->codec;
               in_codec = avcodec_find_decoder(avctx->codec_id);
               if (!in_codec)
               {
                   fprintf(stderr, "in codec not found\n");
                   exit(1);
               }
               break;
           }
       }

       // Search for output video codec info
       AVCodec *out_codec = nullptr;
       AVCodecContext* o_avctx = nullptr;

       int o_video_stream_index = -1;
       for (int i = 0; i < ofmt_ctx->nb_streams; i++)
       {
           if (ofmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
           {
               o_video_stream_index = i;
               o_avctx = ofmt_ctx->streams[i]->codec;
               out_codec = avcodec_find_encoder(o_avctx->codec_id);
               if (!out_codec)
               {
                   fprintf(stderr, "out codec not found\n");
                   exit(1);
               }
               break;
           }
       }

       // openCV pixel format
       AVPixelFormat pFormat = AV_PIX_FMT_RGB24;
       // Data size
       int numBytes = avpicture_get_size(pFormat, avctx->width, avctx->height);
       // allocate buffer
       uint8_t *buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
       // fill frame structure
       avpicture_fill((AVPicture *)pFrameRGB, buffer, pFormat, avctx->width, avctx->height);
       // frame area
       int y_size = avctx->width * avctx->height;
       // Open input codec
       avcodec_open2(avctx, in_codec, NULL);
       // Main loop
       while (1)
       {
           AVStream *in_stream, *out_stream;
           ret = av_read_frame(ifmt_ctx, &pkt);
           if (ret < 0)
           {
               break;
           }
           in_stream = ifmt_ctx->streams[pkt.stream_index];
           out_stream = ofmt_ctx->streams[pkt.stream_index];
           log_packet(ifmt_ctx, &pkt, "in");
           // copy packet
           pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
           pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
           pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
           pkt.pos = -1;

           log_packet(ofmt_ctx, &pkt, "out");
           if (pkt.stream_index == video_stream_index)
           {
               avcodec_decode_video2(avctx, pFrame, &frameFinished, &pkt);
               if (frameFinished)
               {
                   struct SwsContext *img_convert_ctx;
                   img_convert_ctx = sws_getCachedContext(NULL,
                       avctx->width,
                       avctx->height,
                       avctx->pix_fmt,
                       avctx->width,
                       avctx->height,
                       AV_PIX_FMT_BGR24,
                       SWS_BICUBIC,
                       NULL,
                       NULL,
                       NULL);
                   sws_scale(img_convert_ctx,
                       ((AVPicture*)pFrame)->data,
                       ((AVPicture*)pFrame)->linesize,
                       0,
                       avctx->height,
                       ((AVPicture *)pFrameRGB)->data,
                       ((AVPicture *)pFrameRGB)->linesize);

                   sws_freeContext(img_convert_ctx);

                   // Do some image processing
                   cv::Mat img(pFrame->height, pFrame->width, CV_8UC3, pFrameRGB->data[0],false);
                   cv::GaussianBlur(img,img,Size(5,5),3);
                   cv::imshow("Display", img);
                   cv::waitKey(5);
                   // --------------------------------
                   // Transform back to initial format
                   // --------------------------------
                   img_convert_ctx = sws_getCachedContext(NULL,
                       avctx->width,
                       avctx->height,
                       AV_PIX_FMT_BGR24,
                       avctx->width,
                       avctx->height,
                       avctx->pix_fmt,
                       SWS_BICUBIC,
                       NULL,
                       NULL,
                       NULL);
                   sws_scale(img_convert_ctx,
                       ((AVPicture*)pFrameRGB)->data,
                       ((AVPicture*)pFrameRGB)->linesize,
                       0,
                       avctx->height,
                       ((AVPicture *)pFrame)->data,
                       ((AVPicture *)pFrame)->linesize);
                       // --------------------------------------------
                       // Something must be here
                       // --------------------------------------------
                       //
                       // Write video frame (How to write frame to output stream ?)
                       //
                       // --------------------------------------------
                        sws_freeContext(img_convert_ctx);
               }

           }
           else // write sound frame
           {
               ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
           }
           if (ret < 0)
           {
               fprintf(stderr, "Error muxing packet\n");
               break;
           }
           // Decrease packet ref counter
           av_packet_unref(&pkt);
       }
       av_write_trailer(ofmt_ctx);
    end:
       avformat_close_input(&ifmt_ctx);
       // close output
       if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
       {
           avio_closep(&ofmt_ctx->pb);
       }
       avformat_free_context(ofmt_ctx);
       if (ret < 0 && ret != AVERROR_EOF)
       {
           char buf_err[AV_ERROR_MAX_STRING_SIZE] = { 0 };
           av_make_error_string(buf_err, AV_ERROR_MAX_STRING_SIZE, ret);
           fprintf(stderr, "Error occurred: %s\n", buf_err);
           return 1;
       }

       avcodec_close(avctx);
       av_free(pFrame);
       av_free(pFrameRGB);

       return 0;
    }
  • Convert videos from .264 to .265 (HEVC) with ffmpeg [closed]

    11 August 2024, by John Terragnoli

    I see that there are a few questions on this subject but I am still getting errors. All I want to do is convert videos in my library to HEVC so they take up less space.
    I've tried this:


    ffmpeg -i input.mp4 -c:v libx265 output.mp4


    ffmpeg seems to take a long time and the output seems to be about the right size. The video will play with VLC, but the icon is weird, and when I try to open it with QuickTime I get the error: 'The document “output.mov” could not be opened. The file isn’t compatible with QuickTime Player.'


    I don't want to change any of the fancy settings. I just want the files to take up less space and with minimal or no quality loss.


    Thanks!


    EDIT: I'm having trouble keeping the time stamps that I put into the videos.
    Originally I was using exiftool in the terminal, but sometimes that doesn't work with videos, so I would AirDrop them to my iPhone, use an app called Metapho to change the dates, and then AirDrop them back. Exiftool was great, but sometimes it just wouldn't work; it would change the date to something like 1109212 Aug 2nd. Weird. Bottom line: when I do these conversions, I really don't want to lose the time stamps in them.
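    Not the asker's command, but a sketch of a conversion that keeps the defaults while addressing the two symptoms described: tagging the HEVC stream as hvc1 (which QuickTime generally requires) and carrying the source's global metadata into the output. File names are placeholders, and whether this preserves the Metapho-written dates is an assumption to verify.

# Sketch only: hvc1 tagging for QuickTime, audio copied, metadata carried over.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx265",                 # HEVC video with default quality settings
    "-tag:v", "hvc1",                  # QuickTime-compatible HEVC tag
    "-c:a", "copy",                    # keep the audio stream untouched
    "-map_metadata", "0",              # copy global metadata such as creation_time
    "-movflags", "use_metadata_tags",  # also keep QuickTime-style metadata keys
    "output.mp4",
], check=True)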


    ORIGINAL FILE THAT I TIMESTAMPED, IN .264


ffmpeg version 4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.8)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.1_2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include/darwin -fno-stack-check' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_original.mov':
  Metadata:
    major_brand     : qt  
    minor_version   : 0
    compatible_brands: qt  
    creation_time   : 2019-10-22T18:48:43.000000Z
    encoder         : HandBrake 0.10.2 2015060900
    com.apple.quicktime.creationdate: 1994-12-25T18:00:00Z
  Duration: 00:01:21.27, start: 0.000000, bitrate: 800 kb/s
    Chapter #0:0: start 0.000000, end 81.265000
    Metadata:
      title           : Chapter 12
    Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p(tv, smpte170m/smpte170m/bt709, progressive), 710x482 [SAR 58409:65535 DAR 1043348:794715], 634 kb/s, SAR 9172:10291 DAR 404229:307900, 29.95 fps, 29.97 tbr, 90k tbn, 180k tbc (default)
    Metadata:
      creation_time   : 2019-10-22T18:48:43.000000Z
      handler_name    : Core Media Video
      encoder         : 'avc1'
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 160 kb/s (default)
    Metadata:
      creation_time   : 2019-10-22T18:48:43.000000Z
      handler_name    : Core Media Audio
    Stream #0:2(und): Data: bin_data (text / 0x74786574), 0 kb/s
    Metadata:
      creation_time   : 2019-10-22T18:48:43.000000Z
      handler_name    : Core Media Text
At least one output file must be specified


    FILE CONVERTED TO HEVC, WITHOUT -COPYTS TAG


ffmpeg version 4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.8)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.1_2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include/darwin -fno-stack-check' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_original_HEVC.mov':
  Metadata:
    major_brand     : qt  
    minor_version   : 512
    compatible_brands: qt  
    encoder         : Lavf58.29.100
  Duration: 00:01:21.30, start: 0.000000, bitrate: 494 kb/s
    Chapter #0:0: start 0.000000, end 81.265000
    Metadata:
      title           : Chapter 12
    Stream #0:0: Video: hevc (Main) (hvc1 / 0x31637668), yuv420p(tv, progressive), 710x482 [SAR 9172:10291 DAR 404229:307900], 356 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 29.97 tbc (default)
    Metadata:
      handler_name    : Core Media Video
      encoder         : Lavc58.54.100 libx265
    Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      handler_name    : Core Media Audio
    Stream #0:2(eng): Data: bin_data (text / 0x74786574), 0 kb/s
    Metadata:
      handler_name    : SubtitleHandler
At least one output file must be specified


    FILE CONVERTED TO HEVC, WITH -COPYTS TAG


ffmpeg version 4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
  built with Apple clang version 11.0.0 (clang-1100.0.33.8)
  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.1_2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.jdk/Contents/Home/include/darwin -fno-stack-check' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_original_HEVC_keepts.mov':
  Metadata:
    major_brand     : qt  
    minor_version   : 512
    compatible_brands: qt  
    encoder         : Lavf58.29.100
  Duration: 00:01:21.30, start: 0.000000, bitrate: 494 kb/s
    Chapter #0:0: start 0.000000, end 81.265000
    Metadata:
      title           : Chapter 12
    Stream #0:0: Video: hevc (Main) (hvc1 / 0x31637668), yuv420p(tv, progressive), 710x482 [SAR 9172:10291 DAR 404229:307900], 356 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 29.97 tbc (default)
    Metadata:
      handler_name    : Core Media Video
      encoder         : Lavc58.54.100 libx265
    Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      handler_name    : Core Media Audio
    Stream #0:2(eng): Data: bin_data (text / 0x74786574), 0 kb/s
    Metadata:
      handler_name    : SubtitleHandler
At least one output file must be specified
