
Other articles (102)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MédiaSPIP, or news about your projects, using the news section.
    In spipeo, the default MédiaSPIP theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of type "news item", the default fields are: Publication date (customise the publication date) (...)

  • Organising by category

    17 May 2013

    In MédiaSPIP, a section goes by two names: category and section.
    The various documents stored in MédiaSPIP can be filed under different categories. A category can be created by clicking "publish a category" in the publish menu at the top right (after logging in). A category can also be placed inside another category, so a tree of categories can be built.
    The next time a document is published, the newly created category will be offered (...)

On other sites (6435)

  • Multiple format changes in FFmpeg for thermal camera

    6 February 2023, by Greynol4

    I'm having trouble generating a command to process the raw data from a UVC thermal camera so that it can be colorized and then sent to a virtual device, with the intention of streaming it over RTSP. This is on a Raspberry Pi 3B+ running 32-bit Bullseye.

    


    The original code that works perfectly for previewing it is:

    


    ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo - | ffplay -pixel_format gray16le -video_size 256x192 -f rawvideo -i - -vf 'normalize=smoothing=10, format=pix_fmts=rgb48, pseudocolor=p=inferno'


    


    Essentially, this takes the raw data, cuts out the useful portion, then pipes it to ffplay, where it is read as 16-bit grayscale (in this case gray16le), normalized, converted to 48-bit RGB, and finally run through a pseudocolor filter.
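    The reinterpretation in that pipe works because the byte counts line up: after the crop, a 256x192 yuyv422 frame and a 256x192 gray16le frame are both 2 bytes per pixel, so ffplay can read the same bytes under a different label. A small sketch of that size arithmetic (the numpy view is purely an illustration, not part of the original commands):

    ```python
    import numpy as np

    # One cropped frame: 256x192 pixels, 2 bytes/pixel for packed yuyv422
    WIDTH, HEIGHT, BPP = 256, 192, 2
    frame_size = WIDTH * HEIGHT * BPP
    assert frame_size == 98304  # gray16le is also 2 bytes/pixel: identical size

    # Reinterpreting the same bytes as little-endian 16-bit grayscale, which is
    # what ffplay's "-pixel_format gray16le" does with the piped raw data
    raw = np.zeros(frame_size, dtype=np.uint8)   # stand-in for one piped frame
    gray16 = raw.view('<u2').reshape(HEIGHT, WIDTH)
    assert gray16.shape == (192, 256)
    ```

    Because the sizes match exactly, no bytes are dropped or padded at the pipe boundary; the label on the data is all that changes.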

    


    I haven't been able to translate this into an ffmpeg-only pipeline, because it throws codec errors or format errors, or converts the 16-bit data to 10-bit even though I need 16-bit. I have tried using v4l2loopback and two instances of ffmpeg in separate windows to see where the error was actually occurring, but I suspect that introduces more format issues that distract from the original problem. The closest I have been able to get is

    


    ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo /dev/video3

    


    Followed by

    


    ffmpeg -video_size 256x192 -i /dev/video3  -f rawvideo -pix_fmt gray16le -vf 'normalize=smoothing=10,format=pix_fmts=rgb48, pseudocolor=p=inferno' -f rawvideo -f v4l2 /dev/video4

    


    This results in a non-colorized but somewhat useful image, with certain temperatures showing as missing pixels, whereas the ffplay command shows a properly colorized stream without missing pixels.

    


    I'll include my configuration and the log from the preview command, but the log doesn't show errors unless I modify parameters and presumably mess up the syntax.

    


     ffmpeg -input_format yuyv422 -video_size 256x384 -i /dev/video0 -vf 'crop=h=(ih/2):y=(ih/2)' -pix_fmt yuyv422 -f rawvideo - | ffplay -pixel_format gray16le -video_size 256x192 -f rawvideo -i - -vf 'normalize=smoothing=10, format=pix_fmts=rgb48, pseudocolor=p=inferno'
ffplay version N-109758-gbdc76f467f Copyright (c) 2003-2023 the FFmpeg developers
  built with gcc 10 (Raspbian 10.2.1-6+rpi1)
  configuration: --prefix=/usr/local --enable-nonfree --enable-gpl --enable-hardcoded-tables --disable-ffprobe --disable-ffplay --enable-libx264 --enable-libx265 --enable-sdl --enable-sdl2 --enable-ffplay
  libavutil      57. 44.100 / 57. 44.100
  libavcodec     59. 63.100 / 59. 63.100
  libavformat    59. 38.100 / 59. 38.100
  libavdevice    59.  8.101 / 59.  8.101
  libavfilter     8. 56.100 /  8. 56.100
  libswscale      6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc    56.  7.100 / 56.  7.100
ffmpeg version N-109758-gbdc76f467f Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 10 (Raspbian 10.2.1-6+rpi1)
  configuration: --prefix=/usr/local --enable-nonfree --enable-gpl --enable-hardcoded-tables --disable-ffprobe --disable-ffplay --enable-libx264 --enable-libx265 --enable-sdl --enable-sdl2 --enable-ffplay
  libavutil      57. 44.100 / 57. 44.100
  libavcodec     59. 63.100 / 59. 63.100
  libavformat    59. 38.100 / 59. 38.100
  libavdevice    59.  8.101 / 59.  8.101
  libavfilter     8. 56.100 /  8. 56.100
  libswscale      6.  8.112 /  6.  8.112
  libswresample   4.  9.100 /  4.  9.100
  libpostproc    56.  7.100 / 56.  7.100
Input #0, video4linux2,v4l2, from '/dev/video0':
  Duration: N/A, start: 242.040935, bitrate: 39321 kb/s
  Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 256x384, 39321 kb/s, 25 fps, 25 tbr, 1000k tbn
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> rawvideo (native))
Press [q] to stop, [?] for help
Output #0, rawvideo, to 'pipe:':
  Metadata:
    encoder         : Lavf59.38.100
  Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422(tv, progressive), 256x192, q=2-31, 19660 kb/s, 25 fps, 25 tbn
    Metadata:
      encoder         : Lavc59.63.100 rawvideo
frame=    0 fps=0.0 q=0.0 size=       0kB time=-577014:32:22.77 bitrate=  -0.0kb
Input #0, rawvideo, from 'fd:':
  Duration: N/A, start: 0.000000, bitrate: 19660 kb/s
  Stream #0:0: Video: rawvideo (Y1[0][16] / 0x10003159), gray16le, 256x192, 19660 kb/s, 25 tbr, 25 tbn
frame=   13 fps=0.0 q=-0.0 size=    1152kB time=00:00:00.52 bitrate=18148.4kbits
frame=   25 fps= 24 q=-0.0 size=    2304kB time=00:00:01.00 bitrate=18874.4kbits
frame=   39 fps= 25 q=-0.0 size=    3648kB time=00:00:01.56 bitrate=19156.7kbits
frame=   51 fps= 24 q=-0.0 size=    4800kB time=00:00:02.04 bitrate=19275.3kbits
frame=   64 fps= 24 q=-0.0 size=    6048kB time=00:00:02.56 bitrate=19353.6kbits
frame=   78 fps= 25 q=-0.0 size=    7392kB time=00:00:03.12 bitrate=19408.7kbits




    


    I'd also like to use the correct option so the log isn't scrolling through every frame, and I'd welcome links to beginner resources on adapting a command into a script; that's outside the purview of this question, but any direction on those would be much appreciated.

    


  • FFMPEG changes pixel values when reading and saving png without modification

    25 January 2023, by walrus

    This is a toy problem that came out of trying to identify a bug in a video pipeline I'm working on. The idea is to take a frame from a YUV420 video, modify it as an RGB24 image, and reinsert it. To do this I convert YUV420 -> YUV444 -> RGB -> YUV444 -> YUV420. Doing this without any modification should return the same frame; however, I noticed slight color transformations.
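    A round trip like this can change pixels even without edits: each YUV-RGB hop quantizes to integers, and 4:2:0 additionally subsamples chroma. Here is a minimal sketch of the rounding effect alone, using full-range BT.601 coefficients as an assumption (the matrices and ranges ffmpeg actually applies depend on the stream's colorspace flags):

    ```python
    import numpy as np

    # Full-range BT.601 RGB -> YCbCr and back; rounding Y/Cb/Cr to integers,
    # as a real pipeline does, is where precision can be lost.
    rgb = np.array([[253, 218, 249], [226, 103, 125]], dtype=np.float64)

    y  = 0.299 * rgb[:, 0] + 0.587 * rgb[:, 1] + 0.114 * rgb[:, 2]
    cb = 128 - 0.168736 * rgb[:, 0] - 0.331264 * rgb[:, 1] + 0.5 * rgb[:, 2]
    cr = 128 + 0.5 * rgb[:, 0] - 0.418688 * rgb[:, 1] - 0.081312 * rgb[:, 2]

    # Quantize to integer code values, as storage formats do
    y, cb, cr = (np.round(v).clip(0, 255) for v in (y, cb, cr))

    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    back = np.stack([r, g, b], axis=1).round().clip(0, 255)

    # The round trip is close, but not guaranteed to be bit-exact
    assert np.abs(back - rgb).max() <= 2
    ```

    This only bounds the quantization error of one hop; chroma subsampling in the YUV444 -> YUV420 step can add further, spatially varying drift.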

    


    I tried to isolate the problem using a toy 3x3 RGB32 PNG image. The function read_and_save_image reads the image and then saves it as a new file, returning the pixel array it read. I run this function three times in succession, using the output of each run as the input of the next, to demonstrate a perplexing fact: while passing the image through the function once changes the resulting image's pixel values, doing it twice changes nothing. Perhaps more confusing is that the pixel arrays returned by the function are all the same.

    


    tl;dr: How can I load and save the toy image below with ffmpeg, as a new file, such that the pixel values of the new and original files are identical?

    


    Here is the original image, followed by the results of one and two passes through the function. Note that the pixel values shown when opening these images in Preview have changed ever so slightly; the change becomes noticeable within a video.

    


    Test image (very small) ->
    3x3 test image file <-

    Here are the pixel values read (note that after being loaded and saved there is a change):

    original test image

    test image after one pass

    test image after two passes

    Edit: here is an RGB24 frame extracted from a video I am using to test my pipeline. I had the same issue with pixel values changing after loading and saving with ffmpeg.

    frame from video I was testing the pipeline on

    Here is a screenshot showing how the image is noticeably darker after ffmpeg; the same pixels are shown in the top right corner of the image.

    zoomed in top right corner

    Here is the code of the toy problem:

    import os
    import ffmpeg
    import numpy as np


    def read_and_save_image(in_file, out_file, width, height, pix_fmt='rgb32'):
        input_data, _ = (
            ffmpeg
            .input(in_file)
            .output('pipe:', format='rawvideo', pix_fmt=pix_fmt)
            .run(capture_stdout=True)
        )

        frame = np.frombuffer(input_data, np.uint8)
        print(in_file, '\n', frame.reshape((height, width, -1)))

        save_data = (
            ffmpeg
            .input('pipe:', format='rawvideo', pix_fmt=pix_fmt, s='{}x{}'.format(width, height))
            .output(out_file, pix_fmt=pix_fmt)
            .overwrite_output()
            .run_async(pipe_stdin=True)
        )

        save_data.stdin.write(frame.tobytes())
        save_data.stdin.close()
        #save_data.wait()

        return frame

    try:
        test_img = "test_image.png"
        test_img_1 = "test_image_1.png"
        test_img_2 = "test_image_2.png"
        test_img_3 = "test_image_3.png"

        width, height, pix_fmt = 3, 3, 'rgb32'
        #width, height, pix_fmt = video_stream['width'], video_stream['height'], 'rgb24'
        test_img_pxls = read_and_save_image(test_img, test_img_1, width, height, pix_fmt)
        test_img_1_pxls = read_and_save_image(test_img_1, test_img_2, width, height, pix_fmt)
        test_img_2_pxls = read_and_save_image(test_img_2, test_img_3, width, height, pix_fmt)

        print(np.array_equiv(test_img_pxls, test_img_1_pxls))
        print(np.array_equiv(test_img_1_pxls, test_img_2_pxls))

    except ffmpeg.Error as e:
        print('stdout:', e.stdout.decode('utf8'))
        print('stderr:', e.stderr.decode('utf8'))
        raise e


    !mediainfo --Output=JSON --Full $test_img
    !mediainfo --Output=JSON --Full $test_img_1
    !mediainfo --Output=JSON --Full $test_img_2

    Here is the console output of the program, which shows that the pixel arrays read by ffmpeg are the same despite the images being different:

    test_image.png
     [[[253 218 249 255]
      [252 213 248 255]
      [251 200 244 255]]

     [[253 227 250 255]
      [249 209 236 255]
      [243 169 206 255]]

     [[253 235 251 255]
      [245 195 211 255]
      [226 103 125 255]]]
    test_image_1.png
     [[[253 218 249 255]
      [252 213 248 255]
      [251 200 244 255]]

     [[253 227 250 255]
      [249 209 236 255]
      [243 169 206 255]]

     [[253 235 251 255]
      [245 195 211 255]
      [226 103 125 255]]]
    test_image_2.png
     [[[253 218 249 255]
      [252 213 248 255]
      [251 200 244 255]]

     [[253 227 250 255]
      [249 209 236 255]
      [243 169 206 255]]

     [[253 235 251 255]
      [245 195 211 255]
      [226 103 125 255]]]
    True
    True
    {
    "media": {
    "@ref": "test_image.png",
    "track": [
    {
    "@type": "General",
    "ImageCount": "1",
    "FileExtension": "png",
    "Format": "PNG",
    "FileSize": "4105",
    "StreamSize": "0",
    "File_Modified_Date": "UTC 2023-01-19 13:49:00",
    "File_Modified_Date_Local": "2023-01-19 13:49:00"
    },
    {
    "@type": "Image",
    "Format": "PNG",
    "Format_Compression": "LZ77",
    "Width": "3",
    "Height": "3",
    "BitDepth": "32",
    "Compression_Mode": "Lossless",
    "StreamSize": "4105"
    }
    ]
    }
    }

    {
    "media": {
    "@ref": "test_image_1.png",
    "track": [
    {
    "@type": "General",
    "ImageCount": "1",
    "FileExtension": "png",
    "Format": "PNG",
    "FileSize": "128",
    "StreamSize": "0",
    "File_Modified_Date": "UTC 2023-01-24 15:31:58",
    "File_Modified_Date_Local": "2023-01-24 15:31:58"
    },
    {
    "@type": "Image",
    "Format": "PNG",
    "Format_Compression": "LZ77",
    "Width": "3",
    "Height": "3",
    "BitDepth": "32",
    "Compression_Mode": "Lossless",
    "StreamSize": "128"
    }
    ]
    }
    }

    {
    "media": {
    "@ref": "test_image_2.png",
    "track": [
    {
    "@type": "General",
    "ImageCount": "1",
    "FileExtension": "png",
    "Format": "PNG",
    "FileSize": "128",
    "StreamSize": "0",
    "File_Modified_Date": "UTC 2023-01-24 15:31:59",
    "File_Modified_Date_Local": "2023-01-24 15:31:59"
    },
    {
    "@type": "Image",
    "Format": "PNG",
    "Format_Compression": "LZ77",
    "Width": "3",
    "Height": "3",
    "BitDepth": "32",
    "Compression_Mode": "Lossless",
    "StreamSize": "128"
    }
    ]
    }
    }
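    That mediainfo output is consistent with ffmpeg re-encoding the PNG stream: FileSize drops from 4105 to 128 bytes while every decoded pixel matches, which is what lossless recompression looks like. (The visible shift in Preview may instead come from ancillary data, such as a color profile, not surviving the rewrite; that is an assumption, not something the logs show.) A sketch of the "same pixels, different bytes" effect using plain zlib, which is PNG's underlying compressor:

    ```python
    import zlib
    import numpy as np

    # Same pixel data, two compression settings: the encoded streams can differ
    # byte-for-byte (and in size) while still decoding to identical pixels.
    pixels = np.tile(np.array([253, 218, 249, 255], dtype=np.uint8), 300).tobytes()
    fast = zlib.compress(pixels, 1)    # favor speed
    small = zlib.compress(pixels, 9)   # favor size

    assert zlib.decompress(fast) == zlib.decompress(small) == pixels
    ```

    So a FileSize change alone does not prove the pixels changed; comparing decoded arrays, as the script above does, is the right equality test.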

  • ffmpeg giving Error while decoding stream #0:1: Invalid data found when processing input

    11 October 2023, by user1432181

    I am trying to merge two video files into one using ffmpeg on Windows. The process has been proven to work over and over (with over 100 files merged at some points), but I have come across an input file that causes it to fail with the errors:

    [aac @ 00000142532f74c0] channel element 1.0 is not allocated
    Error while decoding stream #0:1: Invalid data found when processing input
    [aac @ 00000142532f74c0] channel element 1.0 is not allocated
    Error while decoding stream #0:1: Invalid data found when processing input
    [aac @ 00000142532f74c0] channel element 1.0 is not allocated
    .
    .
    .

    There are three command-line steps to get here, using a concat-inputs.dat file containing:

    file E:/..../snippet A.mp4
    file E:/..../snippet B.mp4

    (Copies of these files can be found at https://filebin.net/77wbowvh7vbklkey/snippet_A.mp4 and https://filebin.net/77wbowvh7vbklkey/snippet_B.mp4)
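    For reference, the concat demuxer's documented list format single-quotes paths, with embedded quotes escaped shell-style, which matters once names contain spaces (as "snippet A.mp4" does). A hypothetical helper that writes such a list (concat_list is not part of the original setup):

    ```python
    # Hypothetical helper: build a concat-demuxer list file like concat-inputs.dat,
    # quoting each path so spaces and quotes survive parsing.

    def concat_list(paths):
        def quote(p):
            # Single-quote the path; escape embedded single quotes as '\''
            return "'" + p.replace("'", r"'\''") + "'"
        return "".join("file {}\n".format(quote(p)) for p in paths)

    print(concat_list(["E:/clips/snippet A.mp4", "E:/clips/snippet B.mp4"]))
    # file 'E:/clips/snippet A.mp4'
    # file 'E:/clips/snippet B.mp4'
    ```

    Writing the result to a file and passing it to -f concat -safe 0 -i reproduces the setup described above.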


    Step 1:

    > ffmpeg-6.0-full_build/bin/ffmpeg -y -progress ".Default.mp4.progressinfo.dat" -vsync 0 -f concat -safe 0 -i "E:/...../concat-inputs.dat" -c:v copy -c:a copy -crf 0 -b:v 10M "E:/...../video.Default.mp4"

    with the output....

    built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
      configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
      libavutil      58.  2.100 / 58.  2.100
      libavcodec     60.  3.100 / 60.  3.100
      libavformat    60.  3.100 / 60.  3.100
      libavdevice    60.  1.100 / 60.  1.100
      libavfilter     9.  3.100 /  9.  3.100
      libswscale      7.  1.100 /  7.  1.100
      libswresample   4. 10.100 /  4. 10.100
      libpostproc    57.  1.100 / 57.  1.100
    -vsync is deprecated. Use -fps_mode
    Passing a number to -vsync is deprecated, use a string argument as described in the manual.
    [mov,mp4,m4a,3gp,3g2,mj2 @ 000001bf88ffe240] Auto-inserting h264_mp4toannexb bitstream filter
    Input #0, concat, from 'E:/...../concat-inputs.dat':
      Duration: N/A, start: -0.010667, bitrate: 20382 kb/s
      Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 20043 kb/s, 50 fps, 50 tbr, 12800 tbn
        Metadata:
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
          encoder         : Lavc60.3.100 libx264
      Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 339 kb/s
        Metadata:
          handler_name    : SoundHandler
          vendor_id       : [0][0][0][0]
    Output #0, mp4, to 'E:/...../video.Default.mp4':
      Metadata:
        encoder         : Lavf60.3.100
      Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 10000 kb/s, 50 fps, 50 tbr, 12800 tbn
        Metadata:
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
          encoder         : Lavc60.3.100 libx264
      Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 339 kb/s
        Metadata:
          handler_name    : SoundHandler
          vendor_id       : [0][0][0][0]
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
      Stream #0:1 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    frame=    0 fps=0.0 q=-1.0 size=       0kB time=00:00:00.00 bitrate=N/A speed=N/A
    [mov,mp4,m4a,3gp,3g2,mj2 @ 000001bf890653c0] Auto-inserting h264_mp4toannexb bitstream filter
    [mp4 @ 000001bf89000580] Non-monotonous DTS in output stream 0:1; previous: 180224, current: 180192; changing to 180225. This may result in incorrect timestamps in the output file.
    frame=  210 fps=0.0 q=-1.0 Lsize=   11537kB time=00:00:04.21 bitrate=22433.7kbits/s speed=41.9x
    video:11417kB audio:114kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.053312%

    Step 2:

    > ffmpeg-6.0-full_build/bin/ffmpeg -y -progress ".Default.mp4.progressinfo.dat" -vsync 0 -f concat -safe 0 -i "E:/...../concat-inputs.dat" -c:v copy -c:a copy -crf 0 -b:v 10M "E:/...../audio.Default.wav"

    which outputs...

    ffmpeg version 6.0-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
      built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
      configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
      libavutil      58.  2.100 / 58.  2.100
      libavcodec     60.  3.100 / 60.  3.100
      libavformat    60.  3.100 / 60.  3.100
      libavdevice    60.  1.100 / 60.  1.100
      libavfilter     9.  3.100 /  9.  3.100
      libswscale      7.  1.100 /  7.  1.100
      libswresample   4. 10.100 /  4. 10.100
      libpostproc    57.  1.100 / 57.  1.100
    -vsync is deprecated. Use -fps_mode
    Passing a number to -vsync is deprecated, use a string argument as described in the manual.
    [mov,mp4,m4a,3gp,3g2,mj2 @ 00000246d314e240] Auto-inserting h264_mp4toannexb bitstream filter
    Input #0, concat, from 'E:/...../concat-inputs.dat':
      Duration: N/A, start: -0.010667, bitrate: 20382 kb/s
      Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 20043 kb/s, 50 fps, 50 tbr, 12800 tbn
        Metadata:
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
          encoder         : Lavc60.3.100 libx264
      Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 339 kb/s
        Metadata:
          handler_name    : SoundHandler
          vendor_id       : [0][0][0][0]
    [out#0/wav @ 00000246d31bd240] Codec AVOption b (set bitrate (in bits/s)) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
    Output #0, wav, to 'E:/...../audio.Default.wav':
      Metadata:
        ISFT            : Lavf60.3.100
      Stream #0:0(und): Audio: aac (LC) ([255][0][0][0] / 0x00FF), 96000 Hz, 5.1, fltp, 339 kb/s
        Metadata:
          handler_name    : SoundHandler
          vendor_id       : [0][0][0][0]
    Stream mapping:
      Stream #0:1 -> #0:0 (copy)
    Press [q] to stop, [?] for help
    size=       0kB time=00:00:00.00 bitrate=N/A speed=N/A
    [mov,mp4,m4a,3gp,3g2,mj2 @ 00000246d3b009c0] Auto-inserting h264_mp4toannexb bitstream filter
    [wav @ 00000246d3150580] Non-monotonous DTS in output stream 0:0; previous: 180224, current: 180192; changing to 180224. This may result in incorrect timestamps in the output file.
    size=     114kB time=00:00:04.21 bitrate= 222.4kbits/s speed= 128x
    video:0kB audio:114kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.102561%

    Step 3:

    > ffmpeg-6.0-full_build/bin/ffmpeg -y -progress ".Default.mp4.progressinfo.dat" -i "E:/...../video.Default.mp4" -i "E:/...../audio.Default.wav" -crf 0 -c:v copy -c:a aac "E:/...../Default.mp4"

    ... which then gives the errors....

    ffmpeg version 6.0-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
      built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
      configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
      libavutil      58.  2.100 / 58.  2.100
      libavcodec     60.  3.100 / 60.  3.100
      libavformat    60.  3.100 / 60.  3.100
      libavdevice    60.  1.100 / 60.  1.100
      libavfilter     9.  3.100 /  9.  3.100
      libswscale      7.  1.100 /  7.  1.100
      libswresample   4. 10.100 /  4. 10.100
      libpostproc    57.  1.100 / 57.  1.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'E:/...../video.Default.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf60.3.100
      Duration: 00:00:04.23, start: 0.000000, bitrate: 22359 kb/s
      Stream #0:0[0x1](und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 22178 kb/s, 49.80 fps, 50 tbr, 12800 tbn (default)
        Metadata:
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
          encoder         : Lavc60.3.100 libx264
      Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 221 kb/s (default)
        Metadata:
          handler_name    : SoundHandler
          vendor_id       : [0][0][0][0]
    [aac @ 000001425315e580] Multiple frames in a packet.
    Input #1, wav, from 'E:/...../audio.Default.wav':
      Metadata:
        encoder         : Lavf60.3.100
      Duration: 00:00:04.22, bitrate: 221 kb/s
      Stream #1:0: Audio: aac (LC) ([255][0][0][0] / 0x00FF), 96000 Hz, 5.1, fltp, 339 kb/s
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
      Stream #0:1 -> #0:1 (aac (native) -> aac (native))
    Press [q] to stop, [?] for help
    Output #0, mp4, to 'E:/...../Default.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf60.3.100
      Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 22178 kb/s, 49.80 fps, 50 tbr, 12800 tbn (default)
        Metadata:
          handler_name    : VideoHandler
          vendor_id       : [0][0][0][0]
          encoder         : Lavc60.3.100 libx264
      Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 96000 Hz, 5.1, fltp, 341 kb/s (default)
        Metadata:
          handler_name    : SoundHandler
          vendor_id       : [0][0][0][0]
          encoder         : Lavc60.3.100 aac
    frame=    0 fps=0.0 q=-1.0 size=       0kB time=-577014:32:22.77 bitrate=  -0.0kbits/s speed=N/A
    [aac @ 00000142532f74c0] channel element 1.0 is not allocated
    Error while decoding stream #0:1: Invalid data found when processing input
    [aac @ 00000142532f74c0] channel element 1.0 is not allocated
    .
    .
    .

    If I merge snippet B with itself, it works; it's something about snippet A that is causing the problem.


    Is there any way to get around this? What is it about snippet A that is causing the problem, and is there a way to "normalize" it so that it can be merged as part of the set?


    Note: I just upgraded to ffmpeg 6 after a previous version gave the same problems, so I will also deal with the deprecation messages when I can.
