
On other sites (4309)
-
How can I correctly provide a mock webcam video to Chrome?
15 December 2022, by doppelgreener

I'm trying to run end-to-end testing in Chrome for a product that requires a webcam feed halfway through to operate. From what I understand, this means providing a fake webcam video to Chrome using the
--use-file-for-fake-video-capture="/path/to/video.y4m"
command line argument. It will then use that as a webcam video.


However, no matter what y4m file I provide, I get the following error from Chrome running under these conditions:



DOMException: Could not start video source
{
 code: 0,
 message: "Could not start video source",
 name: "NotReadableError"
}




Notably I can provide an audio file just fine using
--use-file-for-fake-audio-capture
and Chrome will work with it well. The video has been my sticking point.


This error comes out of the following straightforward mediaDevices request:



navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(data => {
    // do stuff
  })
  .catch(err => {
    // oh no!
  });




(This always hits the “oh no!” branch when a video file is provided.)



What I've tried so far



I've been running Chrome with the following command line arguments (newlines added for readability), and I'm using a Mac, hence the open command:




open -a "Google Chrome" --args
 --disable-gpu
 --use-fake-device-for-media-stream
 --use-file-for-fake-video-capture="~/Documents/mock/webcam.y4m"
 --use-file-for-fake-audio-capture="~/Documents/mock/microphone.wav"




webcam.y4m and microphone.wav were generated from a video file I recorded.


I first recorded a twenty-second mp4 video using my browser's MediaRecorder, downloaded the result, and converted it with the following commands:



ffmpeg -y -i original.mp4 -f wav -vn microphone.wav
ffmpeg -y -i original.mp4 webcam.y4m




When this didn't work, I tried the same using a twenty-second movie file I recorded in QuickTime:



ffmpeg -y -i original.mov -f wav -vn microphone.wav
ffmpeg -y -i original.mov webcam.y4m




When that also failed, I went straight to the Chromium file that explains fake video capture, followed the example y4m file list it links to, downloaded the grandma file, and passed that to Chrome as the command line argument instead:



open -a "Google Chrome" --args
 --disable-gpu
 --use-fake-device-for-media-stream
 --use-file-for-fake-video-capture="~/Documents/mock/grandma_qcif.y4m"
 --use-file-for-fake-audio-capture="~/Documents/mock/microphone.wav"




Chrome provides me with the exact same error in all of these situations.



The only time Chrome doesn't error out with that mediaDevices request is when I omit the video completely:



open -a "Google Chrome" --args
 --disable-gpu
 --use-fake-device-for-media-stream
 --use-file-for-fake-audio-capture="~/Documents/mock/microphone.wav"




Accounting for C420mpeg2



TestRTC suggests Chrome will “crash” if I give it a C420mpeg2 file, and recommends that simply replacing the metadata fixes the issue. Indeed, the video file I generate from ffmpeg has the following header:


YUV4MPEG2 W1280 H720 F30:1 Ip A1:1 C420mpeg2 XYSCSS=420MPEG2




Chrome doesn't actually crash when run with this file; I just get the error above. But if I edit the video file to use the following header instead, per TestRTC's recommendation, I get the same result:



YUV4MPEG2 W1280 H720 F30:1 Ip A1:1 C420 XYSCSS=420MPEG2




The video file still gives me the above error in these conditions.
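For what it's worth, the header swap described above can be scripted instead of hand-edited; a minimal Python sketch (the file names in the example are hypothetical, and it assumes the whole file fits in memory):

```python
def patch_y4m_colorspace(data: bytes) -> bytes:
    """Rewrite a Y4M stream header's C420mpeg2 tag to plain C420.

    Only the first line (the stream header) is touched; frame data
    is passed through unchanged.
    """
    header, sep, rest = data.partition(b"\n")
    return header.replace(b"C420mpeg2", b"C420") + sep + rest

# Example usage (hypothetical paths):
# with open("webcam.y4m", "rb") as f:
#     patched = patch_y4m_colorspace(f.read())
# with open("webcam_c420.y4m", "wb") as f:
#     f.write(patched)
```

Note that the XYSCSS=420MPEG2 tag is left alone, matching the hand-edited header shown above.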



What can/should I do?



How should I be providing a video file to Chrome for this command line argument?



How should I be recording or creating the video file?



How should I convert it to y4m?


-
Problems piping ffmpeg to flac encoder
19 May 2019, by Sebastian Olsen

I need to encode a flac file with seektables. ffmpeg's flac encoder does not include seektables, so I need to use the flac CLI. I'm trying to make it possible to convert any arbitrary audio file into a seekable flac file by first piping it through ffmpeg, then into the flac encoder.
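As an aside, if a post-encoding pass were acceptable, seektables can also be added to a finished file with metaflac rather than at encode time; a minimal sketch (the flag syntax is an assumption to verify against your metaflac version):

```python
import subprocess

def add_seektable_cmd(path: str, interval_s: int = 10) -> list:
    # metaflac --add-seekpoint=Ns adds one seekpoint every N seconds.
    return ["metaflac", f"--add-seekpoint={interval_s}s", path]

# To run it (requires metaflac on PATH):
# subprocess.run(add_seektable_cmd("out.flac"), check=True)
```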
export const transcodeToFlac: AudioTranscoder<{}> = ({
  source,
  destination
}) => {
  return new Promise((resolve, reject) => {
    let totalSize = 0
    const { stdout: ffmpegOutput, stderr: ffmpegError } = spawn("ffmpeg", [
      "-i",
      source,
      "-f",
      "wav",
      "pipe:1"
    ])
    const { stdout: flacOutput, stdin: flacInput, stderr: flacError } = spawn(
      "flac",
      ["-"]
    )
    flacOutput.on("data", (buffer: Buffer) => {
      totalSize += buffer.byteLength
    })
    ffmpegError.on("data", error => {
      console.log(error.toString())
    })
    flacError.on("data", error => {
      console.log(error.toString())
    })
    //stream.on("error", reject)
    destination.on("finish", () => {
      resolve({
        mime: "audio/flac",
        size: totalSize,
        codec: "flac",
        bitdepth: 16,
        ext: "flac"
      })
    })
    ffmpegOutput.pipe(flacInput)
    flacOutput.pipe(destination)
  })
}

While this code works, the resulting flac file is not correct. The source audio has a duration of 06:14, but the flac file has a duration of 06:45:47. Encoding the flac manually, without piping ffmpeg into it, works fine, but I cannot do that in a server environment where I need to utilize streams.

Here's what the flac encoder outputs when transcoding:
flac 1.3.2
Copyright (C) 2000-2009 Josh Coalson, 2011-2016 Xiph.Org Foundation
flac comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
welcome to redistribute it under certain conditions. Type `flac' for details.
-: WARNING: skipping unknown chunk 'LIST' (use --keep-foreign-metadata to keep)
-: WARNING, cannot write back seekpoints when encoding to stdout
-: 0% complete, ratio=0.357
0% complete, ratio=0.432
0% complete, ratio=0.482
0% complete, ratio=0.527
0% complete, ratio=0.541
1% complete, ratio=0.554
1% complete, ratio=0.563
1% complete, ratio=0.571
size= 36297kB time=00:03:30.70 bitrate=1411.2kbits/s speed= 421x
1% complete, ratio=0.572
1% complete, ratio=0.570
1% complete, ratio=0.577
1% complete, ratio=0.583
1% complete, ratio=0.584
1% complete, ratio=0.590
1% complete, ratio=0.592
size= 64512kB time=00:06:14.49 bitrate=1411.2kbits/s speed= 421x
video:0kB audio:64512kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead:
0.000185%
-: WARNING: unexpected EOF; expected 1073741823 samples, got 16510976 samples
2% complete, ratio=0.579
-
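A note on that "expected 1073741823 samples" warning: it is consistent with ffmpeg writing the WAV "unknown length" placeholder into the header, since it cannot seek back to patch the chunk sizes when writing to a pipe. The arithmetic checks out assuming 16-bit stereo at 44.1 kHz, which matches the 1411.2 kbit/s in the log:

```python
# Why flac expects 1073741823 samples: when ffmpeg writes WAV to a pipe it
# cannot seek back to fix the header, so the data-chunk size stays 0xFFFFFFFF.
unknown_chunk_size = 0xFFFFFFFF     # bytes, the "unknown length" placeholder
bytes_per_frame = 2 * 2             # 16-bit samples x 2 channels
expected_samples = unknown_chunk_size // bytes_per_frame
print(expected_samples)             # 1073741823, exactly what flac reported

# And at 44.1 kHz that bogus sample count is the bogus 06:45:47 duration:
seconds = expected_samples / 44100
print(int(seconds // 3600), int(seconds % 3600 // 60))  # 6 45
```

If that is indeed the cause, one hedged avenue is flac's --ignore-chunk-sizes option (which tells the encoder to ignore the header's chunk sizes and read until EOF) in the spawn arguments, though I have not verified that it fixes the reported duration here.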
PyAV: force new framerate while remuxing stream?
7 June 2019, by ToxicFrog

I have a Python program that receives a sequence of H264 video frames over the network, which I want to display and, optionally, record. The camera records at 30 FPS and sends frames as fast as it can, which isn't consistently 30 FPS due to changing network conditions; sometimes it falls behind and then catches up, and rarely it drops frames entirely.
The "display" part is easy ; I don’t need to care about timing or stream metadata, just display the frames as fast as they arrive :
input = av.open(get_video_stream())
for packet in input.demux(video=0):
    for frame in packet.decode():
        # A bunch of numpy and pygame code here to convert the frame to RGB
        # row-major and blit it to the screen

The "record" part looks like it should be easy:
input = av.open(get_video_stream())
output = av.open(filename, 'w')
output.add_stream(template=input.streams[0])
for packet in input.demux(video=0):
    for frame in packet.decode():
        # ...display code...
    packet.stream = output.streams[0]
    output.mux_one(packet)
output.close()

And indeed this produces a valid MP4 file containing all the frames, and if I play it back with mplayer -fps 30 it works fine. But that -fps 30 is absolutely required:

$ ffprobe output.mp4
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 960x720,
1277664 kb/s, 12800 fps, 12800 tbr, 12800 tbn, 25600 tbc (default)

Note the 12,800 frames per second. It should look something like this (produced by calling mencoder -fps 30 and piping the frames into it):

$ ffprobe mencoder_test.mp4
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 960x720,
2998 kb/s, 30 fps, 30 tbr, 90k tbn, 180k tbc (default)

Inspecting the packets and frames I get from the input stream, I see:

stream: time_base=1/1200000
codec: framerate=25 time_base=1/50
packet: dts=None pts=None duration=48000 time_base=1/1200000
frame: dts=None pts=None time=None time_base=1/1200000

So the packets and frames don't have timestamps at all; they have a time_base which matches neither the timebase that ends up in the final file nor the actual framerate of the camera, and the codec has a framerate and timebase that match neither the final file, the camera framerate, nor the other video stream metadata!

The PyAV documentation is all but entirely absent when it comes to issues of timing and framerate, but I have tried manually setting various combinations of stream, packet, and frame time_base, dts, and pts, with no success. I can always remux the recorded videos again to get the correct framerate, but I'd rather write video files that are correct in the first place.

So, how do I get PyAV to remux the video in a way that produces an output that is correctly marked as 30 fps?