
Media (91)
-
MediaSPIP Simple: future default graphic theme?
26 September 2013, by
Updated: October 2013
Language: French
Type: Video
-
with chosen
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
without chosen
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
chosen config
13 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
SPIP - plugins - embed code - Example
2 September 2013, by
Updated: September 2013
Language: French
Type: Image
-
GetID3 - file information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
Other articles (30)
-
Libraries and binaries specific to video and audio processing
31 January 2010, by
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries: FFMpeg: the main encoder, used to transcode almost every type of video and audio file into formats playable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
Optional, complementary binaries: flvtool2: (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match the chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; and the generation of a thumbnail: the extraction of a (...)
On other sites (6989)
-
ffmpeg failing to add png mask to video: Requested planes not available
23 August 2022, by Alexandr Sugak
I am trying to add a png mask to make a webm video round (cut off its corners).


The command I am using:


video="./dist/tmp/19_2.webm"
mask="./dist/tmp/mask.png"
output="./dist/tmp/circle.webm"

ffmpeg -report -c:v libvpx-vp9 -i "${video}" -loop 1 -i "${mask}" -filter_complex " \
[1:v]alphaextract[alf];\
[0:v][alf]alphamerge" \
-c:a copy -c:v libvpx-vp9 "${output}"



The command output:


sh ./scripts/video_mask.sh 
ffmpeg started on 2022-08-23 at 17:27:48
Report written to "ffmpeg-20220823-172748.log"
Log level: 48
ffmpeg version 5.1-tessus Copyright (c) 2000-2022 the FFmpeg developers
 built with Apple clang version 11.0.0 (clang-1100.0.33.17)
 configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-version3 --pkg-config-flags=--static --disable-ffplay
 libavutil 57. 28.100 / 57. 28.100
 libavcodec 59. 37.100 / 59. 37.100
 libavformat 59. 27.100 / 59. 27.100
 libavdevice 59. 7.100 / 59. 7.100
 libavfilter 8. 44.100 / 8. 44.100
 libswscale 6. 7.100 / 6. 7.100
 libswresample 4. 7.100 / 4. 7.100
 libpostproc 56. 6.100 / 56. 6.100
[libvpx-vp9 @ 0x7fa072f05140] v1.11.0-30-g888bafc78
 Last message repeated 1 times
Input #0, matroska,webm, from './dist/tmp/19_2.webm':
 Metadata:
 ENCODER : Lavf59.27.100
 Duration: 00:00:02.77, start: -0.007000, bitrate: 308 kb/s
 Stream #0:0(eng): Video: vp9 (Profile 0), yuva420p(tv, unknown/bt709/iec61966-2-1, progressive), 640x480, SAR 1:1 DAR 4:3, 1k tbr, 1k tbn (default)
 Metadata:
 ALPHA_MODE : 1
 ENCODER : Lavc59.37.100 libvpx-vp9
 DURATION : 00:00:02.744000000
 Stream #0:1(eng): Audio: opus, 48000 Hz, mono, fltp (default)
 Metadata:
 ENCODER : Lavc59.37.100 libopus
 DURATION : 00:00:02.767000000
Input #1, png_pipe, from './dist/tmp/mask.png':
 Duration: N/A, bitrate: N/A
 Stream #1:0: Video: png, pal8(pc), 640x480 [SAR 2835:2835 DAR 4:3], 25 fps, 25 tbr, 25 tbn
[libvpx-vp9 @ 0x7fa082f04880] v1.11.0-30-g888bafc78
Stream mapping:
 Stream #0:0 (libvpx-vp9) -> alphamerge
 Stream #1:0 (png) -> alphaextract:default
 alphamerge:default -> Stream #0:0 (libvpx-vp9)
 Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[libvpx-vp9 @ 0x7fa082f04880] v1.11.0-30-g888bafc78
[Parsed_alphaextract_0 @ 0x7fa083906e80] Requested planes not available.
[Parsed_alphaextract_0 @ 0x7fa083906e80] Failed to configure input pad on Parsed_alphaextract_0
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #0:0
Conversion failed!



I've tried different combinations of codecs and pixel formats but I still get the same error. My initial understanding was that ffmpeg fails to find the alpha channel in the input video. With the -c:v libvpx-vp9 option it looks like ffmpeg correctly picks up the yuva420p pixel format, but it still gives the same error.

What am I doing wrong?


Update: if I remove the alphaextract step as suggested in the comments, ffmpeg starts processing the video indefinitely (the video I use to test is only 2 seconds long). If I specify the number of frames manually, the output is generated but the mask does not seem to have any effect:


ffmpeg -c:v libvpx-vp9 -i "${video}" -loop 1 -i "${mask}" -filter_complex " \
[0:v][1:v]alphamerge" \
-c:a copy -b:v 2000k -vframes 60 "${output}"



sh ./scripts/video_mask.sh 
ffmpeg version 5.1-tessus Copyright (c) 2000-2022 the FFmpeg developers
 built with Apple clang version 11.0.0 (clang-1100.0.33.17)
 configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libaom --enable-libass --enable-libbluray --enable-libdav1d --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvmaf --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-version3 --pkg-config-flags=--static --disable-ffplay
 libavutil 57. 28.100 / 57. 28.100
 libavcodec 59. 37.100 / 59. 37.100
 libavformat 59. 27.100 / 59. 27.100
 libavdevice 59. 7.100 / 59. 7.100
 libavfilter 8. 44.100 / 8. 44.100
 libswscale 6. 7.100 / 6. 7.100
 libswresample 4. 7.100 / 4. 7.100
 libpostproc 56. 6.100 / 56. 6.100
[libvpx-vp9 @ 0x7fdd6b005f00] v1.11.0-30-g888bafc78
 Last message repeated 1 times
Input #0, matroska,webm, from './dist/tmp/19_2.webm':
 Metadata:
 ENCODER : Lavf59.27.100
 Duration: 00:00:02.77, start: -0.007000, bitrate: 308 kb/s
 Stream #0:0(eng): Video: vp9 (Profile 0), yuva420p(tv, unknown/bt709/iec61966-2-1, progressive), 640x480, SAR 1:1 DAR 4:3, 1k tbr, 1k tbn (default)
 Metadata:
 ALPHA_MODE : 1
 ENCODER : Lavc59.37.100 libvpx-vp9
 DURATION : 00:00:02.744000000
 Stream #0:1(eng): Audio: opus, 48000 Hz, mono, fltp (default)
 Metadata:
 ENCODER : Lavc59.37.100 libopus
 DURATION : 00:00:02.767000000
Input #1, png_pipe, from './dist/tmp/mask.png':
 Duration: N/A, bitrate: N/A
 Stream #1:0: Video: png, pal8(pc), 640x480 [SAR 2835:2835 DAR 4:3], 25 fps, 25 tbr, 25 tbn
File './dist/tmp/circle.webm' already exists. Overwrite? [y/N] y
[libvpx-vp9 @ 0x7fdd6b007ec0] v1.11.0-30-g888bafc78
Stream mapping:
 Stream #0:0 (libvpx-vp9) -> alphamerge
 Stream #1:0 (png) -> alphamerge
 alphamerge:default -> Stream #0:0 (libvpx-vp9)
 Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[libvpx-vp9 @ 0x7fdd6b007ec0] v1.11.0-30-g888bafc78
[libvpx-vp9 @ 0x7fdd6b024580] v1.11.0-30-g888bafc78
Output #0, webm, to './dist/tmp/circle.webm':
 Metadata:
 encoder : Lavf59.27.100
 Stream #0:0: Video: vp9, yuva420p(tv, unknown/bt709/iec61966-2-1, progressive), 640x480 [SAR 1:1 DAR 4:3], q=2-31, 2000 kb/s, 1k fps, 1k tbn
 Metadata:
 encoder : Lavc59.37.100 libvpx-vp9
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
 Stream #0:1(eng): Audio: opus, 48000 Hz, mono, fltp (default)
 Metadata:
 ENCODER : Lavc59.37.100 libopus
 DURATION : 00:00:02.767000000
frame= 60 fps= 16 q=2.0 Lsize= 285kB time=00:00:01.98 bitrate=1175.5kbits/s speed=0.526x 
video:270kB audio:11kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.529399%
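
For reference, one hedged reading of the original error: alphaextract only accepts pixel formats that actually carry an alpha plane, and the mask decodes as pal8, which has none. If the PNG is a black-and-white matte rather than an image with real transparency, a sketch like the one below (untested) skips alphaextract and feeds the mask to alphamerge as a single grey plane; if the PNG really does carry transparency, inserting format=rgba before alphaextract would be the analogous fix. The -t value is only illustrative, based on the probed ~2.77 s duration, and is there to stop the looped image input from keeping the encoder running forever.


# Untested sketch: treat the pal8 PNG as a black/white matte, convert it to a
# single grey plane and hand that straight to alphamerge; -t caps the output at
# roughly the source duration so the looped image input stops encoding.
ffmpeg -c:v libvpx-vp9 -i "${video}" -loop 1 -i "${mask}" -filter_complex " \
 [1:v]format=gray[alf];\
 [0:v][alf]alphamerge" \
 -c:a copy -c:v libvpx-vp9 -t 2.8 "${output}"
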



-
FFMPEG Loudnorm reading JSON data
9 July 2022, by NineCattoRules
I tried to normalize some audio files using FFMPEG Loudnorm as described here.


However, in Python, I don't understand how to read the measurement data from the first pass.


My code:


getLoud = subprocess.Popen(f"ffmpeg -i {file_path} -filter:a loudnorm=print_format=json -f null NULL", shell=True, stdout=subprocess.PIPE).stdout
getLoud = getLoud.read().decode()
# parse json_str:
jsonstr_loud = json.loads(getLoud)



This gives me
"errorMessage": "Expecting value: line 1 column 1 (char 0)"


I also tried this:


os.system(f"ffmpeg -i {file_path} -filter:a loudnorm=print_format=json -f null NULL")



and it outputs:


ffmpeg version N-60236-gffb000fff8-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2022 the FFmpeg developers...
...
[Parsed_loudnorm_0 @ 0x5921940] 
{
 "input_i" : "-9.33",
 "input_tp" : "-0.63",
 "input_lra" : "0.60",
 "input_thresh" : "-19.33",
 "output_i" : "-24.08",
 "output_tp" : "-15.40",
 "output_lra" : "0.60",
 "output_thresh" : "-34.08",
 "normalization_type" : "dynamic",
 "target_offset" : "0.08"
}



In Python, how can I use those parameters, such as input_i, input_tp, etc., that I need for the 2nd pass?

I can't use ffmpeg-normalize because I'm using FFMPEG as a Layer in Lambda.
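
For what it's worth, the detail that usually causes the "Expecting value" error here is that loudnorm prints its JSON summary to stderr, not stdout, so reading stdout yields an empty string. A rough sketch along those lines (the function name and input file are made up for illustration; it only assumes the standard subprocess and json modules):


import json
import subprocess

def loudnorm_first_pass(file_path):
    """Run the loudnorm measurement pass and return its JSON stats as a dict."""
    proc = subprocess.run(
        ["ffmpeg", "-hide_banner", "-i", file_path,
         "-filter:a", "loudnorm=print_format=json", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # loudnorm writes its summary to stderr; the JSON is the last {...} block there.
    log = proc.stderr
    return json.loads(log[log.rindex("{"):log.rindex("}") + 1])

stats = loudnorm_first_pass("input.mp3")  # hypothetical input file
print(stats["input_i"], stats["input_tp"], stats["input_lra"])


The second pass would then feed these values back through the loudnorm filter's measured_I, measured_TP, measured_LRA and measured_thresh options, as documented for the filter.
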

-
OSError: [Errno 9] Bad file descriptor when downloading using pytube and playing with discord.py
15 September 2022, by Trevor Mathisen
When using pytube to download a YouTube video and discord.py to play it, I am getting an OSError: [Errno 9] Bad file descriptor error. I had this working at one point and can't seem to figure out what I changed that broke it.


Full traceback:


2022-09-15 20:20:44 INFO discord.voice_client The voice handshake is being terminated for Channel ID 902294184994693224 (Guild ID 902294184994693220)
2022-09-15T20:20:44.010142763Z 2022-09-15 20:20:44 INFO discord.voice_client Disconnecting from voice normally, close code 1000.
2022-09-15T20:20:44.058592513Z 2022-09-15 20:20:44 ERROR discord.player Exception in voice thread Thread-5
2022-09-15T20:20:44.058623864Z Traceback (most recent call last):
2022-09-15T20:20:44.058629130Z File "/usr/local/lib/python3.10/dist-packages/discord/player.py", line 698, in run
2022-09-15T20:20:44.058632003Z self._do_run()
2022-09-15T20:20:44.058634500Z File "/usr/local/lib/python3.10/dist-packages/discord/player.py", line 691, in _do_run
2022-09-15T20:20:44.058637013Z play_audio(data, encode=not self.source.is_opus())
2022-09-15T20:20:44.058639334Z File "/usr/local/lib/python3.10/dist-packages/discord/voice_client.py", line 683, in send_audio_packet
2022-09-15T20:20:44.058648759Z self.socket.sendto(packet, (self.endpoint_ip, self.voice_port))
2022-09-15T20:20:44.058653057Z OSError: [Errno 9] Bad file descriptor
2022-09-15T20:20:44.058673762Z 2022-09-15 20:20:44 INFO discord.player ffmpeg process 12 has not terminated. Waiting to terminate...
2022-09-15T20:20:44.062083854Z 2022-09-15 20:20:44 INFO discord.player ffmpeg process 12 should have terminated with a return code of -9.



Dockerfile:


ENV MEDIA_DIR='/media/'
RUN mkdir -p /media



Download function (check_path returns True, showing the file is there):


def download_videos(stream, filename):
    print(f'downloading {filename}')
    filepath = stream.download(output_path=config_db.environ_path, filename=f'{filename}.mp4')
    print(f'completed download of {filepath}')
    check_path = Path(filepath)
    if check_path.is_file():
        print(check_path)
    return filepath



Play function:


async def play(voice_channel, message, control=None):
    vc = get_vc()
    if not vc:
        vc = await voice_channel.connect()
    next_yt = YouTube(next_song)
    next_file = sub(r'\W+', '', next_yt.title.replace(' ', '_').replace('__', '_')).lower()
    next_song_path = download_videos(next_yt.streams.filter(only_audio=True, file_extension='mp4')[0], next_file)

    await message.channel.send(f'Done getting ready, I\'ll be there in a moment.')

    while next_song:
        while vc.is_playing():
            await asynciosleep(0.5)
            continue

        try:
            vc.play(FFmpegPCMAudio(next_song_path))
            print(f'Playing {next_song_path} with latency of {vc.average_latency}')
            vc.source = PCMVolumeTransformer(vc.source, volume=0.15)
        except Exception as e:
            print(e)
            await vc.disconnect()
            next_song = None
            return
        next_song = sounds_db.get_next_song()
        next_yt = YouTube(next_song) if next_song else None
        next_file = sub(r'\W+', '', next_yt.title.replace(' ', '_').replace('__', '_')).lower() if next_song else None
        next_song_path = download_videos(next_yt.streams.filter(only_audio=True, file_extension='mp4')[0], next_file) if next_song else None



I've mounted the /media/ dir with docker -v and can see that the file is getting downloaded, and when I copy the file to my local machine I can play it in an audio player.


The program can access the SQLite database right next to the files in question just fine.


I've deployed the container locally and to two different VPSes with the same file structure, with the same behavior. I'm ripping my hair out.
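
Not a confirmed fix, just a guess from the traceback: the voice handshake is terminated (close code 1000) immediately before the player thread writes to its UDP socket, which is consistent with the voice client disconnecting while audio is still being sent. A small guard around playback, sketched below with discord.py's public VoiceClient API (play_guarded is a made-up helper name), might at least turn the crash into an explicit error:


import asyncio
from discord import FFmpegPCMAudio, PCMVolumeTransformer

async def play_guarded(vc, path):
    # Refuse to start if the voice socket is already gone.
    if not vc.is_connected():
        raise RuntimeError('voice client disconnected before playback')
    vc.play(FFmpegPCMAudio(path))
    vc.source = PCMVolumeTransformer(vc.source, volume=0.15)
    # Poll so a disconnect mid-song ends the wait instead of raising in the player thread.
    while vc.is_playing() and vc.is_connected():
        await asyncio.sleep(0.5)
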