
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (76)
-
Updating from version 0.1 to 0.2
24 June 2013
An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What's new?
Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Customizing by adding your logo, banner or background image
5 September 2013
Some themes take three customization elements into account: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013
Present changes to your MediaSPIP site, or news about your projects, via the news section of your MediaSPIP.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of the "news item" type, the fields offered by default are: publication date (customize the publication date) (...)
On other sites (5791)
-
Search For Specific Values Result in Python
5 May 2020, by jamlot
I am attempting to write a Python script that looks for black video and silent audio in a file and returns only the time instances when they occur.



I have the following code working using the ffmpeg-python wrapper, but I can't figure out an efficient way to parse the stdout or stderr to return only the instances of black_start, black_end, black_duration, silence_start, silence_end, silence_duration.



Putting ffmpeg aside for those who are not experts: how can I use re.findall or similar to define a regex that returns only the above values?



import ffmpeg

input = ffmpeg.input(source)
video = input.video.filter('blackdetect', d=0, pix_th=0.00)    # d = min black duration, pix_th = black pixel threshold
audio = input.audio.filter('silencedetect', d=0.1, n='-60dB')  # d = min silence duration, n = noise threshold
out = ffmpeg.output(audio, video, 'out.null', format='null')   # null muxer: analyse only, discard the output
run = out.run_async(pipe_stdout=True, pipe_stderr=True)
result = run.communicate()

print(result)
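As an aside (not part of the original post): run.communicate() returns a (stdout, stderr) tuple of bytes, and ffmpeg writes its log, including the blackdetect/silencedetect lines, to stderr, so the text to parse can be pulled out of result roughly like this:

# result is a (stdout_bytes, stderr_bytes) tuple; the detection lines are in stderr
stderr_text = result[1].decode('utf-8', errors='replace')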




Running the code produces the ffmpeg output, which contains the results I need. Here is the output (edited for brevity):



(b'', b"ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
 built with Apple clang version 11.0.0 (clang-1100.0.33.17)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.2_3 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags=-fno-stack-check --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/Users/otoolej/Documents/_lab/source/black-silence-detect/AUUV71900381_test.mov':
 Metadata:
 major_brand : qt 
 minor_version : 537199360
 compatible_brands: qt 
 creation_time : 2019-11-14T04:12:49.000000Z
 Duration: 00:03:50.28, start: 0.000000, bitrate: 185168 kb/s
 Stream #0:0(eng): Video: prores (HQ) (apch / 0x68637061), yuv422p10le(tv, bt709, progressive), 1920x1080, 183596 kb/s, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 25 tbn, 25 tbc (default)
 Metadata:
 creation_time : 2019-11-14T04:12:49.000000Z
 handler_name : Apple Video Media Handler
 encoder : Apple ProRes 422 (HQ)
 timecode : 00:00:00:00
 Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 48000 Hz, stereo, s16, 1536 kb/s (default)
 Metadata:
 creation_time : 2019-11-14T04:12:49.000000Z
 handler_name : Apple Sound Media Handler
 timecode : 00:00:00:00
 Stream #0:2(eng): Data: none (tmcd / 0x64636D74) (default)
 Metadata:
 creation_time : 2019-11-14T04:12:49.000000Z
 handler_name : Time Code Media Handler
 timecode : 00:00:00:00
Only '-vf blackdetect=d=0:pix_th=0.00' read, ignoring remaining -vf options: Use ',' to separate filters
Only '-af silencedetect=d=0.1:n=-60dB' read, ignoring remaining -af options: Use ',' to separate filters
Stream mapping:
 Stream #0:0 -> #0:0 (prores (native) -> wrapped_avframe (native))
 Stream #0:1 -> #0:1 (pcm_s16le (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, null, to 'pipe:':
 Metadata:
 major_brand : qt 
 minor_version : 537199360
 compatible_brands: qt 
 encoder : Lavf58.29.100
 Stream #0:0(eng): Video: wrapped_avframe, yuv422p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 25 fps, 25 tbn, 25 tbc (default)
 Metadata:
 creation_time : 2019-11-14T04:12:49.000000Z
 handler_name : Apple Video Media Handler
 timecode : 00:00:00:00
 encoder : Lavc58.54.100 wrapped_avframe
 Stream #0:1(eng): Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s (default)
 Metadata:
 creation_time : 2019-11-14T04:12:49.000000Z
 handler_name : Apple Sound Media Handler
 timecode : 00:00:00:00
 encoder : Lavc58.54.100 pcm_s16le
[silencedetect @ 0x7fdd82d011c0] silence_start: 0
frame= 112 fps=0.0 q=-0.0 size=N/A time=00:00:05.00 bitrate=N/A speed=9.96x 
[blackdetect @ 0x7fdd82e06580] black_start:0 black_end:5 black_duration:5
[silencedetect @ 0x7fdd82d011c0] silence_end: 5.06285 | silence_duration: 5.06285
frame= 211 fps=210 q=-0.0 size=N/A time=00:00:09.00 bitrate=N/A speed=8.97x 
frame= 319 fps=212 q=-0.0 size=N/A time=00:00:13.00 bitrate=N/A speed=8.63x 
frame= 427 fps=213 q=-0.0 size=N/A time=00:00:17.08 bitrate=N/A speed=8.51x 
frame= 537 fps=214 q=-0.0 size=N/A time=00:00:22.00 bitrate=N/A speed=8.77x 
frame= 650 fps=216 q=-0.0 size=N/A time=00:00:26.00 bitrate=N/A speed=8.63x 
frame= 761 fps=217 q=-0.0 size=N/A time=00:00:31.00 bitrate=N/A speed=8.82x 
frame= 874 fps=218 q=-0.0 size=N/A time=00:00:35.00 bitrate=N/A speed=8.71x 
frame= 980 fps=217 q=-0.0 size=N/A time=00:00:39.20 bitrate=N/A speed=8.67x 
... 
frame= 5680 fps=213 q=-0.0 size=N/A time=00:03:47.20 bitrate=N/A speed=8.53x 
[silencedetect @ 0x7fdd82d011c0] silence_start: 227.733
[silencedetect @ 0x7fdd82d011c0] silence_end: 229.051 | silence_duration: 1.3184
[silencedetect @ 0x7fdd82d011c0] silence_start: 229.051
[blackdetect @ 0x7fdd82e06580] black_start:229.28 black_end:230.24 black_duration:0.96
frame= 5757 fps=214 q=-0.0 Lsize=N/A time=00:03:50.28 bitrate=N/A speed=8.54x 
video:3013kB audio:43178kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
[silencedetect @ 0x7fdd82d011c0] silence_end: 230.28 | silence_duration: 1.22856
\n")




What is the most efficient way to parse the output data to find and return only those result values, so I can build further logic on them in my code? In this case, I would want only the following values returned:



silence_start: 0
silence_end: 5.06285
silence_duration: 5.06285

black_start: 0
black_end: 5
black_duration: 5

silence_start: 227.733
silence_end: 229.051
silence_duration: 1.3184

black_start: 229.28
black_end: 230.24
black_duration: 0.96

silence_start: 229.051
silence_end: 230.28
silence_duration: 1.22856


I think there is a way to get only those values using ffprobe, but I couldn't get that to work within the wrapper method. Possibly I would have to run ffprobe as a subprocess and parse that result somehow, but that would be a complete redo.
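One way to pull those values out (a sketch, not from the original post, assuming the stderr log has been decoded into a string named stderr_text as suggested above) is to match a couple of regular expressions against the whole text:

import re

# blackdetect reports each segment on one line:
#   black_start:0 black_end:5 black_duration:5
black_re = re.compile(
    r'black_start:(?P<start>[\d.]+)\s+'
    r'black_end:(?P<end>[\d.]+)\s+'
    r'black_duration:(?P<duration>[\d.]+)'
)

# silencedetect spreads each segment over two lines:
#   silence_start: 0
#   silence_end: 5.06285 | silence_duration: 5.06285
silence_start_re = re.compile(r'silence_start:\s*(?P<start>-?[\d.]+)')
silence_end_re = re.compile(
    r'silence_end:\s*(?P<end>[\d.]+)\s*\|\s*silence_duration:\s*(?P<duration>[\d.]+)'
)

black_segments = [m.groupdict() for m in black_re.finditer(stderr_text)]
silence_starts = [m.group('start') for m in silence_start_re.finditer(stderr_text)]
silence_ends = [m.groupdict() for m in silence_end_re.finditer(stderr_text)]

print(black_segments)    # e.g. [{'start': '0', 'end': '5', 'duration': '5'}, ...]
print(silence_starts)    # e.g. ['0', '227.733', '229.051']
print(silence_ends)      # end/duration pairs, in order of appearance

re.findall works the same way if plain tuples are preferred over named groups.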


-
How do I search for files inside a directory, then burn the name of that directory into the files inside using ffmpeg?
23 January 2021, by ematogra
I'm struggling to articulate this question, but I'm after a way of searching for files inside any directory that may exist in my search area (i.e. without specifying the name of the directory), then using that directory name as a burn-in on the files inside it, using ffmpeg.


So for example, say I had a folder with my script inside it. I've just created a folder called "day 01" inside the folder containing the script, with some mxf files inside it. If I run my script, I want it to find the mxf files inside "day 01", then run ffmpeg and have it write "day 01" as a burn-in on the picture of those mxf files.


I know how to do the burn-in, I just don't know how to reference the directory "day 01".


Hope that makes sense. Thanks in advance.
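One possible shape for such a script (a sketch under assumptions, not the poster's code: the glob pattern, the output naming and the drawtext styling are all placeholders) is to glob the subdirectories with pathlib and feed each parent folder's name to ffmpeg's drawtext filter:

import subprocess
from pathlib import Path

root = Path(__file__).resolve().parent            # the folder that contains this script

for mxf_file in sorted(root.glob('*/*.mxf')):     # any .mxf one level down, whatever the folder is called
    label = mxf_file.parent.name                  # e.g. "day 01"
    out_file = mxf_file.with_name(mxf_file.stem + '_burnin.mov')   # hypothetical output name

    # drawtext burns the folder name onto the picture; depending on the ffmpeg build you may
    # also need fontfile=..., and the text should be escaped if it can contain ':' or "'".
    vf = "drawtext=text='{}':fontcolor=white:fontsize=48:x=20:y=20".format(label)

    subprocess.run(
        ['ffmpeg', '-i', str(mxf_file), '-vf', vf, '-c:a', 'copy', str(out_file)],
        check=True,
    )

Since the burn-in itself is already solved, the key piece is really just mxf_file.parent.name, which recovers the directory name without hard-coding it.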


-
FFMPEG file writer in python 2.7
6 April 2017, by byBanachTarskiIamcorrect
Trying to animate a string in Python; I think my code is fine, but I'm having difficulties with the file writer. My code (based on https://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-tutorial/) is:
import numpy as np
import scipy as sci
import matplotlib.pyplot as plt
import matplotlib.animation as animation

plt.rcParams['animation.ffmpeg_path'] = 'C:\FFMPEG\bin\ffmpeg'

s1 = 10.15
gamma = (np.pi*np.sqrt(2))/2
gamma = sci.special.jn_zeros(0, 10)   # first ten zeros of the Bessel function J0
gamma1 = gamma[9]
gamma2 = gamma[8]
print gamma1, gamma2

sigma = np.linspace(0, 2*s1, 10000)

def xprime(sigma, t):
    alpha = gamma1*(np.cos(np.pi*t/s1)*np.cos((np.pi*sigma)/s1))
    beta = gamma1*(np.sin(np.pi*t/s1)*np.sin((np.pi*sigma)/s1))
    xprime = np.cos(alpha)*np.cos(beta)
    return xprime

def yprime(sigma, t):
    alpha = gamma2*(np.cos(np.pi*t/s1)*np.cos((np.pi*sigma)/s1))
    beta = gamma2*(np.sin(np.pi*t/s1)*np.sin((np.pi*sigma)/s1))
    yprime = np.cos(alpha)*np.sin(beta)
    return yprime

fig = plt.figure()
ax = plt.axes(xlim=(-0.4, 0.4), ylim=(-3, 3))
line, = ax.plot([], [], lw=2)

def init():
    line.set_data([], [])
    return line,

def animate(i):
    sigma = np.linspace(0, 2*s1, 10000)
    t = (i*2*s1)/200
    yint = sci.integrate.cumtrapz(yprime(sigma, t), sigma)
    xint = sci.integrate.cumtrapz(xprime(sigma, t), sigma)
    line.set_data(xint, yint)
    return line,

anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=200, interval=20, blit=True)

FFwriter = animation.FFMpegWriter()
anim.save('basic_animation.mp4', writer=FFwriter, fps=30, extra_args=['-vcodec', 'libx264'])
plt.show()

Currently I'm getting the error message:
RuntimeError: Passing in values for arguments for arguments fps, codec, bitrate, extra_args, or metadata is not supported when writer is an existing MovieWriter instance. These should instead be passed as arguments when creating the MovieWriter instance.'
I think my error is in the calling or placement of the FFMpeg file, but I'm unsure what I'm doing wrong. It's probably very obvious, but I can't see it at the moment and I'm not sure what the error message actually means.
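Going by the error text, the likely fix (a sketch, not tested against this exact script) is to move fps and extra_args out of anim.save and into the FFMpegWriter constructor, which is what the message asks for:

# pass fps/extra_args when creating the writer, not when saving
FFwriter = animation.FFMpegWriter(fps=30, extra_args=['-vcodec', 'libx264'])
anim.save('basic_animation.mp4', writer=FFwriter)
plt.show()

Separately, the Windows path is probably safer written as a raw string, e.g. plt.rcParams['animation.ffmpeg_path'] = r'C:\FFMPEG\bin\ffmpeg.exe', so that sequences like \b in the path are not interpreted as escape characters.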