
Media (91)
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
-
USGS Real-time Earthquakes
8 September 2011
Updated: September 2011
Language: French
Type: Text
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
-
Podcasting Legal guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creativecommons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (43)
-
The MediaSPIP configuration area
29 November 2010
The MediaSPIP configuration area is restricted to administrators. An "administer" menu link is usually displayed at the top of the page [1].
It lets you configure your site in detail.
Navigation within this configuration area is divided into three parts: the general site configuration, which notably allows you to modify the main information about the site (...) -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player was created specifically for MediaSPIP: its appearance can be fully modified to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (4783)
-
Bad file descriptor decoding mp3 to pipe with ffmpeg
1 February 2017, by Pete Bleackley
Can any FFmpeg gurus help me with the following?
I’m trying to convert a podcast to PCM and downsample it to 16KHz mono before feeding it to a speech recognition system for transcription. The command line
ffmpeg -i http://media.blubrry.com/conlangery/content.blubrry.com/conlangery/Conlangery01.mp3 -f s16le -ac 1 -ar 16000 pipe:0
fails with
av_interleaved_write_frame() : Bad file descriptor
What is the problem here and how do I fix it?
EDIT
Full error message is
ffmpeg version N-83189-gd5d474aea5-static http://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 5.4.1 (Debian 5.4.1-4) 20161202
configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-5 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg
libavutil 55. 44.100 / 55. 44.100
libavcodec 57. 75.100 / 57. 75.100
libavformat 57. 62.100 / 57. 62.100
libavdevice 57. 2.100 / 57. 2.100
libavfilter 6. 69.100 / 6. 69.100
libswscale 4. 3.101 / 4. 3.101
libswresample 2. 4.100 / 2. 4.100
libpostproc 54. 2.100 / 54. 2.100
Input #0, mp3, from 'http://media.blubrry.com/conlangery/content.blubrry.com/conlangery/Conlangery01.mp3':
Metadata:
track : 1
album : Conlangery Podcast
title : Conlangery 01
artist : George Corley
date : 2011
Duration: 00:44:58.08, start: 0.025057, bitrate: 128 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 128 kb/s
Metadata:
encoder : LAME3.98r
Output #0, s16le, to 'pipe:0':
Metadata:
track : 1
album : Conlangery Podcast
title : Conlangery 01
artist : George Corley
date : 2011
encoder : Lavf57.62.100
Stream #0:0: Audio: pcm_s16le, 16000 Hz, mono, s16, 256 kb/s
Metadata:
encoder : Lavc57.75.100 pcm_s16le
Stream mapping:
Stream #0:0 -> #0:0 (mp3 (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
av_interleaved_write_frame(): Bad file descriptor
Error writing trailer of pipe:0: Bad file descriptor
size= 1kB time=00:00:00.02 bitrate= 256.0kbits/s speed=95.2x
video:0kB audio:1kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
Conversion failed!
Traceback (most recent call last):
File "./PodcastTranscriber.py", line 206, in <module>
PodcastTranscriber('http://conlangery.com/feed/',upload)()
File "./PodcastTranscriber.py", line 101, in __call__
self.process(item)
File "./PodcastTranscriber.py", line 136, in process
(audio,errors)=mp3.run(stdout=subprocess.PIPE)
File "/usr/local/lib/python2.7/site-packages/ffmpy.py", line 105, in run
raise FFRuntimeError(self.cmd, self.process.returncode, out[0], out[1])
ffmpy.FFRuntimeError: `ffmpeg -i http://media.blubrry.com/conlangery/content.blubrry.com/conlangery/Conlangery01.mp3 -f s16le -ac 1 -ar 16000 pipe:0` exited with status 1
The invoking code is
def process(self,item):
    """Downloads the audio and transcribes it"""
    audio_url=None
    print item['title']
    for link in item.links:
        if link['rel']=='enclosure':
            audio_url=link['href']
    if audio_url is not None:
        pubDate=to_date(item.published_parsed)
        if self.lastUploadDate is None or pubDate>self.lastUploadDate:
            self.lastUploadDate=pubDate
            mp3=ffmpy.FFmpeg(inputs={audio_url:None},
                             outputs={'pipe:0':['-f','s16le','-ac','1','-ar','16000']})
            (audio,errors)=mp3.run(stdout=subprocess.PIPE)
            sphinx=subprocess.Popen(['java','-jar','transcriber.jar'],
                                    stdin=audio,
                                    stdout=subprocess.PIPE)
            wiki=threading.Thread(target=self.callback,args=(item,sphinx.stdout))
            wiki.start()
            #mp3.start()
            #mp3.join()
            sphinx.stdin.close()
            wiki.join()
            sphinx.wait()
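For reference, one plausible fix (an untested sketch, not part of the original question): pipe:0 is file descriptor 0, i.e. stdin, which is not open for writing, hence the Bad file descriptor error when ffmpeg tries to write its output there. Writing to pipe:1 (stdout) and feeding the captured bytes to the transcriber's stdin would look roughly like this; audio_url and transcriber.jar are taken from the question, everything else is an assumption.
import subprocess
import ffmpy

# Send the converted PCM to ffmpeg's stdout (pipe:1) instead of its stdin (pipe:0).
mp3 = ffmpy.FFmpeg(
    inputs={audio_url: None},
    outputs={'pipe:1': ['-f', 's16le', '-ac', '1', '-ar', '16000']})
# ffmpy's run() returns a (stdout_data, stderr_data) tuple for the pipes requested.
audio, errors = mp3.run(stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# Pass the PCM bytes to the transcriber through a real pipe rather than stdin=audio.
sphinx = subprocess.Popen(['java', '-jar', 'transcriber.jar'],
                          stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE)
transcript, _ = sphinx.communicate(audio)
This buffers the whole decoded file in memory; streaming it chunk by chunk would avoid that, but it is the smallest change to the flow shown in the question.
-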
FFMpeg - Combine multiple filter_complex and overlay functions
7 June 2016, by Mike Johnson
I am having trouble combining these 3 ffmpeg passes into a single process.
Is this even possible?
Pass 1
ffmpeg -y -i C:\Users\MJ\Downloads\20151211_pmoney_pmpod.mp3 -loop 1 -i C:\Users\MJ\Documents\pm1080.png -filter_complex "[0:a]showwaves=s=1920x1080:mode=line,colorkey=0x000000:0.01:0.1,format=yuva420p[v];[1:v][v]overlay=0:270[outv]" -map "[outv]" -pix_fmt yuv420p -map 0:a -c:v libx264 -c:a copy -shortest C:\Users\MJ\Documents\20151211_pmoney_pmpod4.mp4
Pass 2
ffmpeg -i "C:\Users\MJ\Documents\20151211_pmoney_pmpod4.mp4" -vf drawtext="fontsize=50:fontcolor=white:fontfile=/Windows/Fonts/impact.ttf:text=Planet Money Podcast on NPR - A/B Split Testing:x=(w-text_w)/2:y=200" -acodec copy "C:\Users\MJ\Documents\20151211_pmoney_pmpod-overlay-text.mp4"
Pass 3
ffmpeg -i "C:\Users\MJ\Documents\20151211_pmoney_pmpod-overlay-text.mp4" -i C:\Users\MJ\Downloads\6.png -filter_complex "overlay=10:10" "C:\Users\MJ\Documents\20151211_pmoney_pmpod-overlay-text1.mp4"
Thanks!
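For reference, combining the three passes should be possible by chaining all the filters in a single -filter_complex (an untested sketch, not from the original post, reusing the inputs and paths from the three passes above; the [wave]/[bg]/[txt] labels are arbitrary):
ffmpeg -y -i C:\Users\MJ\Downloads\20151211_pmoney_pmpod.mp3 -loop 1 -i C:\Users\MJ\Documents\pm1080.png -i C:\Users\MJ\Downloads\6.png -filter_complex "[0:a]showwaves=s=1920x1080:mode=line,colorkey=0x000000:0.01:0.1,format=yuva420p[wave];[1:v][wave]overlay=0:270[bg];[bg]drawtext=fontsize=50:fontcolor=white:fontfile=/Windows/Fonts/impact.ttf:text='Planet Money Podcast on NPR - A/B Split Testing':x=(w-text_w)/2:y=200[txt];[txt][2:v]overlay=10:10[outv]" -map "[outv]" -map 0:a -pix_fmt yuv420p -c:v libx264 -c:a copy -shortest C:\Users\MJ\Documents\20151211_pmoney_pmpod-overlay-text1.mp4
The -loop 1 still image and -shortest are kept from Pass 1; the static 6.png overlay persists for the whole clip because overlay repeats the last frame of its second input by default.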
-
How to Synchronize Audio with Video Frames [Python]
19 September 2023, by Ростислав
I want to stream video from a URL to a server via a socket, which then restreams it to all clients in the room.


This code streams the video frame by frame:


async def stream_video(room, url):
    cap = cv2.VideoCapture(url)
    fps = round(cap.get(cv2.CAP_PROP_FPS))

    while True:
        ret, frame = cap.read()
        if not ret: break
        _, img_bytes = cv2.imencode(".jpg", frame)
        img_base64 = base64.b64encode(img_bytes).decode('utf-8')
        img_data_url = f"data:image/jpeg;base64,{img_base64}"

        await socket.emit('segment', { 'room': room, 'type': 'video', 'stream': img_data_url})
        await asyncio.sleep(1/fps)

    cap.release()

And this is the code for streaming the audio:


async def stream_audio(room, url):
    sample_size = 14000
    cmd_audio = [
        "ffmpeg",
        "-i", url,
        '-vn',
        '-f', 's16le',
        '-c:a', 'pcm_s16le',
        "-ac", "2",
        "-sample_rate", "48000",
        '-ar', '48000',
        "-acodec", "libmp3lame",
        "pipe:1"
    ]
    proc_audio = await asyncio.create_subprocess_exec(
        *cmd_audio, stdout=subprocess.PIPE, stderr=False
    )

    while True:
        audio_data = await proc_audio.stdout.read(sample_size)
        if audio_data:
            await socket.emit('segment', { 'room': room, 'type': 'audio', 'stream': audio_data})
        await asyncio.sleep(1)


But the problem is: how do I synchronize them? How many bytes need to be read from ffmpeg every second so that the audio matches the frames?


I tried the following, but the problem with the chunk size still remained:


while True:
    audio_data = await proc_audio.stdout.read(sample_size)
    if audio_data:
        await socket.emit('segment', { 'room': room, 'type': 'audio', 'stream': audio_data})

    for i in range(fps):
        ret, frame = cap.read()
        if not ret: break
        _, img_bytes = cv2.imencode(".jpg", frame)
        img_base64 = base64.b64encode(img_bytes).decode('utf-8')
        img_data_url = f"data:image/jpeg;base64,{img_base64}"

        await socket.emit('segment', { 'room': room, 'type': 'video', 'stream': img_data_url})
        await asyncio.sleep(1/fps)

I also tried loading a chunk of the audio into pydub, but it shows that the duration of my 14000-byte chunk is 0.07 s, which is very short. And if I increase the chunk size to 192k (as ChatGPT suggested), the audio simply plays very, very fast.
The best chunk size I was able to find is approximately 14000 bytes, but the audio is still not in sync.
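For what it is worth, a possible way to line the two streams up (an untested sketch building on the code above; the stream_av name and the readexactly() call are my own assumptions, not part of the original post): with raw s16le output the byte rate is fixed by the format, 48000 Hz * 2 channels * 2 bytes per sample = 192000 bytes per second, which also matches the 0.07 s that pydub reports for a 14000-byte chunk (14000 / 192000 ≈ 0.073 s). Note that in the command above -acodec libmp3lame comes after -c:a pcm_s16le and overrides it, so pipe:1 would actually carry MP3 rather than raw PCM; the sketch assumes the libmp3lame option is dropped.
# Pair exactly one second of raw PCM with fps video frames per iteration,
# reusing socket, cv2, base64, asyncio, cap and proc_audio from the code above.
SAMPLE_RATE = 48000                  # matches -ar 48000 above
CHANNELS = 2                         # matches -ac 2 above
BYTES_PER_SAMPLE = 2                 # s16le = 16 bits = 2 bytes
BYTES_PER_SECOND = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE  # 192000

async def stream_av(room, cap, proc_audio, fps):
    while True:
        # Block until a full second of audio has been read from ffmpeg.
        audio_data = await proc_audio.stdout.readexactly(BYTES_PER_SECOND)
        await socket.emit('segment', {'room': room, 'type': 'audio', 'stream': audio_data})

        # Send the matching second of video, pacing the frames as before.
        for _ in range(fps):
            ret, frame = cap.read()
            if not ret:
                return
            _, img_bytes = cv2.imencode(".jpg", frame)
            img_base64 = base64.b64encode(img_bytes).decode('utf-8')
            await socket.emit('segment', {'room': room, 'type': 'video',
                                          'stream': f"data:image/jpeg;base64,{img_base64}"})
            await asyncio.sleep(1 / fps)
readexactly() raises IncompleteReadError when the stream ends, so a try/except (or a plain read() length check) would be needed to stop cleanly.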