
Other articles (39)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash fallback is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (6208)

  • Cutting and rejoining videos causes audio de-synchronization using ffmpeg [closed]

    10 December 2023, by Donotalo

    This is the original post. I'm not getting any answer there, so I thought maybe the programming community knows the answer.

    Main Post

    I'm trying to cut a video into pieces and rejoin them using ffmpeg on Windows 11 x64. Here are the details of ffmpeg:

    ffmpeg version 2023-11-22-git-0008e1c5d5-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --pkg-config=pkgconf --disable-w32threads
--disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp
--enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt
--enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2
--enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d
--enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265
--enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx
--enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi
--enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg
--enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc
--enable-dxva2 --enable-d3d11va --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo
--enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt
--enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame
--enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb
--enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite
--enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
  libavutil      58. 32.100 / 58. 32.100
  libavcodec     60. 34.100 / 60. 34.100
  libavformat    60. 17.100 / 60. 17.100
  libavdevice    60.  4.100 / 60.  4.100
  libavfilter     9. 13.100 /  9. 13.100
  libswscale      7.  6.100 /  7.  6.100
  libswresample   4. 13.100 /  4. 13.100
  libpostproc    57.  4.100 / 57.  4.100

    I have a video file in MKV format. I cut it into 3 pieces using the following commands:

    ffmpeg.exe -ss 0:0:0 -i input.mkv -t 0:54:15 -c:v hevc_amf -b:v 3M -c:s mov_text seg-01.mp4
ffmpeg.exe -ss 0:54:29 -i input.mkv -t 0:35:35 -c:v hevc_amf -b:v 3M -c:s mov_text seg-02.mp4
ffmpeg.exe -ss 1:30:12 -i input.mkv -t 0:4:10 -c:v hevc_amf -b:v 3M -c:s mov_text seg-03.mp4

    The audio is copied into all 3 pieces just as in the original. Now I'm joining them using the following command:

ffmpeg.exe -y -f concat -safe 0 -i .\join.txt -c:v hevc_amf -c:a copy -c:s copy -fflags +genpts out.mp4

    where join.txt is:

    file seg-01.mp4
file seg-02.mp4
file seg-03.mp4

    ffmpeg throws the following warning:

    [mp4 @ 000002a5cf9cc040] Non-monotonic DTS in output stream 0:1; previous: 156240896, current: 156240084; changing to 156240897.
This may result in incorrect timestamps in the output file.
[mp4 @ 000002a5cf9cc040] Non-monotonic DTS in output stream 0:1; previous: 258720980, current: 258720462; changing to 258720981.
This may result in incorrect timestamps in the output file.

    I've observed that the audio is de-synchronized after the points where I cut.

    How do I keep audio, video, and subtitles synchronized after cutting and rejoining the video using ffmpeg?
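
    Not an answer from the original thread, but a direction that is often suggested for concat-related audio drift is to re-encode the audio at the join step instead of stream-copying it, so the muxer gets freshly generated, monotonic audio timestamps. Treat the command below as a sketch to experiment with (the aac and aresample choices are illustrative), not a confirmed fix:

# re-encode audio and let aresample add or drop samples so the audio follows its timestamps;
# video and subtitles are handled as in the original command
ffmpeg.exe -y -f concat -safe 0 -i .\join.txt -c:v hevc_amf -b:v 3M -c:a aac -af aresample=async=1 -c:s copy -fflags +genpts out.mp4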

    


  • Is there a problem with the 'read' command in Bash, or with Bash itself when using multiprocessing, or am I maybe making some mistake? [duplicate]

    27 January 2023, by myQs

    First, I should mention that I do not have a lot of experience with Bash scripting.
    Here is the problem that I observe:

    When I execute the read command and run background processes inside the loop, read misses some of its fields in some very rare cases.

    For example: if I read the output of ls -la for a large number of video files and run an ffmpeg command on each of them in a separate sub-process, then in some very rare cases some of the first fields of the read command are missing.
    In that case the remaining fields from the ls output are wrong (holding partial values or assigned to the wrong variables).

    In most cases I get output like this (which is correct):

p1: '-rwxr-x---.'; p2: '1'; p3: 'uman'; p4: 'uman'; p5: '1080519'; p6: 'Jan'; p7: '27'; p8: '05:49'; p9: 'origVideo_453.mp4'

    but for a very few lines the output is not correct and looks like this:
p1: 'an'; p2: '1080519'; p3: 'Jan'; p4: '27'; p5: '05:49'; p6: 'origVideo_454.mp4'; p7: ''; p8: ''; p9: ''

    Here the first two fields are missing, and the third, which should be "uman", is only "an". The value that belongs in p3 ends up in p1, p4's value ends up in p2, and so on, which leaves p7, p8 and p9 empty.

    Here is my bash script:

    #!/bin/bash

#src_dir=/tmp/text_files
src_dir=/tmp/video_files

dest_dir=/tmp/video_files_dest

mkdir -p $dest_dir

handle_video() {
  echo "handling file: '$1'"
  ffmpeg -loglevel error -i $src_dir/$1  -acodec copy -vcodec copy $dest_dir/$1
}

generate_text() {
  str=''
  for k in {1..512}
  do
    random=$(openssl rand -hex 20)
    str="${str}${random} "

    if [ $(( $k % 4 )) -eq 0 ]; then
      str="${str} ${new_line}"
    fi
  done

  echo "${str}" > $src_dir/$1
}

while read p1 p2 p3 p4 p5 p6 p7 p8 p9; do
echo "p1: '$p1'; p2: '$p2'; p3: '$p3'; p4: '$p4'; p5: '$p5'; p6: '$p6'; p7: '$p7'; p8: '$p8'; p9: '$p9'"
  if test -f $src_dir/$p9; then
    handle_video $p9 &
#    generate_text $p9 &
  fi
done << EOF
$(ls -la $src_dir)
EOF

    When I run `handle_video` not in the background but in the same process, I do not have such a problem (i.e., when I remove the trailing `&` from the `handle_video $p9 &` line).

    At first I thought the issue might be in the output of the `ls -la` command, so I tried other commands, but I saw the same kind of results: in most executions `read` gets the fields correctly, but in a very few cases they are wrong.

    I also tried running a different function inside the read loop instead of handle_video (which invokes ffmpeg): the generate_text function.
    To do this I swap which src_dir assignment and which function call are commented out in the script (/tmp/text_files and generate_text $p9 & instead of /tmp/video_files and handle_video $p9 &).
    The interesting thing is that the problem exists with handle_video, but with generate_text there is no such problem at all. At least I have never observed it in any of my tests.
    When testing with handle_video I put 1200 .mp4 video files (1.1 MB each) in the directory /tmp/video_files and run the bash script.

    When testing with generate_text I generate 1200 empty files in the directory /tmp/text_files and run the bash script.

    I also tried to execute the read command with a pipe like this, but the result is the same:

    ls -la $src_dir | while read p1 p2 p3 p4 p5 p6 p7 p8 p9; do
  echo "p1: '$p1'; p2: '$p2'; p3: '$p3'; p4: '$p4'; p5: '$p5'; p6: '$p6'; p7: '$p7'; p8: '$p8'; p9: '$p9'"
  if test -f $src_dir/$p9; then
    handle_video $p9 &
#   generate_text $p9 &
  fi
done

    Bash version: 5.2.15(1)-release
    ffmpeg version: 5.0.2
    Guest OS: Fedora 36 (Workstation Edition)
    VirtualBox: 7.0.2
    Host OS: Windows 10, version 21H2

    Once again: when I do not run the handle_video function in the background (i.e., without the trailing ampersand &), there are no problems.
    And when I use the generate_text function instead of handle_video, again there are no problems.

    So I wonder: is there a problem with the read command and how it receives its fields, is there a problem with how bash runs multiple processes, or is there something that I do not understand?
    Any help and tips are appreciated.

    Here is a snippet of real output:

    p1: '-rwxr-x---.'; p2: '1'; p3: 'uman'; p4: 'uman'; p5: '1080519'; p6: 'Jan'; p7: '27'; p8: '05:49'; p9: 'origVideo_453.mp4'
handling file: 'origVideo_448.mp4'
handling file: 'origVideo_449.mp4'
handling file: 'origVideo_44.mp4'
handling file: 'origVideo_450.mp4'
handling file: 'origVideo_451.mp4'
handling file: 'origVideo_452.mp4'
p1: 'an'; p2: '1080519'; p3: 'Jan'; p4: '27'; p5: '05:49'; p6: 'origVideo_454.mp4'; p7: ''; p8: ''; p9: ''
p1: '-rwxr-x---.'; p2: '1'; p3: 'uman'; p4: 'uman'; p5: '1080519'; p6: 'Jan'; p7: '27'; p8: '05:49'; p9: 'origVideo_455.mp4'
p1: '-rwxr-x---.'; p2: '1'; p3: 'uman'; p4: 'uman'; p5: '1080519'; p6: 'Jan'; p7: '27'; p8: '05:49'; p9: 'origVideo_456.mp4'
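
    Not part of the original question, but a commonly offered explanation for exactly this symptom is that ffmpeg reads from standard input by default, so each background ffmpeg job competes with read for the here-document that feeds the loop and can occasionally swallow part of a line; generate_text never touches stdin, which would fit the observation that it never misbehaves. A minimal sketch of that workaround, assuming this is indeed the cause:

handle_video() {
  echo "handling file: '$1'"
  # -nostdin stops ffmpeg from reading (and consuming) the loop's stdin;
  # appending "< /dev/null" to the ffmpeg line would have the same effect
  ffmpeg -nostdin -loglevel error -i "$src_dir/$1" -acodec copy -vcodec copy "$dest_dir/$1"
}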


    


  • Managing Music Playback Channels

    30 June 2013, by Multimedia Mike (General)

    My Game Music Appreciation site allows users to interact with old video game music by toggling various channels, as long as the underlying synthesizer engine supports it.


    5 NES voices

    Users often find their way to the Nintendo DS section pretty quickly. This is when they notice an obnoxious quirk with the channel toggling feature: specifically, one channel doesn’t seem to map to a particular instrument or track.

    When it comes to computer music playback methodologies, I have long observed that there are 2 general strategies: fixed channel and dynamic channel allocation.

    Fixed Channel Approach
    One of my primary sources of computer-based entertainment used to be watching music. Sure, I listened to it as well. But for things like Amiga MOD files and related tracker formats, there was a rich ecosystem of fun music playback programs that visualized the music. Music visualization modes exist in various music players these days (such as iTunes and Windows Media Player), but those largely just show you a single waveform. Tracker files, by contrast, were synthesized in real time from multiple audio channels, and the players usually showed some form of analysis for each channel. My personal favorite was Cubic Player:


    Open Cubic Player -- oscilloscopes

    Most of these players supported the concept of masking individual channels. In doing so, the user could isolate, study, and enjoy different components of the song. For many 4-channel Amiga MOD files, I observed that the common arrangement was to use the 4 channels for beat (percussion track), bass line, chords, and melody. Thus, it was easy to just listen to, e.g., the bass line in isolation.

    MODs and similar formats specified precisely which digital audio sample to play at what time and on which specific audio channel. Viewing the internals of one of these formats gives the impression that they encode an extremely computer-centric view of music.

    Dynamic Channel Allocation Algorithm
    MODs et al. enjoyed a lot of popularity, but the standard for computer music is MIDI. While MOD and friends took a computer-centric view of music, MIDI takes, well, a music-centric view of music.

    There are MIDI visualization programs as well. The one that came with my Gravis Ultrasound was called PLAYMIDI.EXE. It looked like this…


    Gravis Ultrasound PLAYMIDI.EXE application

    … and it confused me. There are 16 distinct channels being visualized but some channels are shown playing multiple notes. When I dug into the technical details, I learned that MIDI just specifies what notes need to be played, at what times and frequencies and using which instrument samples, and it was the MIDI playback program’s job to make it happen.

    Thus, if a MIDI file specifies that track 1 should play a C major chord consisting of notes C, E, and G, it would transmit events “key-on C; delta time 0; key-on E; delta time 0; key-on G; delta time …; [other commands]”. If the playback program has access to multiple channels (say, up to 32, in the case of the GUS), the intuitive approach would be to maintain a pool of all available channels. Then, when it’s time to process the “key-on C” event, fetch the first available channel from the pool, mark it as in-use, play C on the channel, and return that channel to the pool when either the sample runs its course or the corresponding “key-off C” event is encountered in the MIDI command stream.
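
    To make that pool bookkeeping concrete, here is a toy sketch (mine, not from the original post, written as a shell script purely for illustration, with an arbitrary channel count and no handling for running out of channels):

#!/bin/bash
# Dynamic channel allocation in miniature: free channels live in a pool,
# key-on takes the first free channel, key-off returns it to the pool.
pool=(0 1 2 3 4 5 6 7)        # pretend the synthesizer offers 8 channels
declare -A in_use             # maps a sounding note to the channel playing it

key_on() {                    # $1 = note name
  local ch=${pool[0]}
  pool=("${pool[@]:1}")       # remove that channel from the free pool
  in_use[$1]=$ch
  echo "key-on  $1 -> channel $ch"
}

key_off() {                   # $1 = note name
  echo "key-off $1 -> channel ${in_use[$1]} returned to pool"
  pool+=("${in_use[$1]}")     # hand the channel back
  unset "in_use[$1]"
}

key_on C; key_on E; key_on G  # a C major chord occupies three channels
key_off E                     # E's channel goes back into the pool
key_on D                      # D takes the next free channel from the pool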

    About That Game Music
    Circling back around to my game music website, numerous supported systems use the fixed channel approach for playback while others use the dynamic channel allocation approach, including every Nintendo DS game I have analyzed so far.

    Which approach is better? As in many technical matters, there are trade-offs either way. For many systems, the fixed channel approach is necessary because, in many older audio synthesis systems, different channels had very specific purposes. The 8-bit NES had 5 channels: 2 square wave generators (used musically for melody/treble), 1 triangle wave generator (usually used for the bass line), a noise generator (subverted for all manner of percussive sounds), and a limited digital channel (which was sometimes assigned richer percussive sounds). Dynamic channel allocation wouldn’t work here.

    But the dynamic approach works great on hardware with 16 digital channels available, like, for example, the Nintendo DS. Digital channels are very general-purpose. What about the SNES, with its 8 digital channels? Either approach could work. In practice, most games used a fixed channel approach: games might use 4-6 channels for music while reserving the remainder for various in-game sound effects. Some notable exceptions to this pattern were David Wise’s compositions for Rare’s SNES games (think Battletoads and the various Donkey Kong Country titles). These clearly use some dynamic channel approach, since masking all but one channel will give you a variety of instrument sounds.

    Epilogue
    There! That took a long time to explain, but I find it fascinating for some reason. I need to distill it down to far fewer words because I want to make it a FAQ on my website for “Why can’t I isolate specific tracks for Nintendo DS games?”

    Actually, perhaps I should remove the ability to toggle Nintendo DS channels in the first place. Here’s a funny tale of needless work: I found the Vio2sf engine for synthesizing Nintendo DS music and incorporated it into the program. It didn’t support toggling of individual channels, so I figured out a way to add that feature to the engine. And then I noticed that most Nintendo DS games render that feature moot. After I released the webapp, I learned that I was out of date on the Vio2sf engine. The final insult was that the latest version already supports channel toggling. So I did the work for nothing. But then again, since I want to remove that feature from the UI, doubly so.