
Advanced search
Media (91)
-
Spitfire Parade - Crisis
15 May 2011, by
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011, by
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011, by
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (68)
-
Organizing by category
17 May 2013, by
In MédiaSPIP, a section has two names: "catégorie" and "rubrique".
The documents stored in MédiaSPIP can be filed under different categories. A category is created by clicking "publier une catégorie" in the publish menu at the top right (after logging in). A category can itself be filed inside another category, so a whole tree of categories can be built.
The next time a document is published, the newly created category will be offered (...)
-
Retrieving information from the master site when installing an instance
26 November 2010, by
Purpose
On the main site, a shared (mutualised) instance is defined by several things: the data in the spip_mutus table; its logo; its main author (id_admin in the spip_mutus table, matching an id_auteur in the spip_auteurs table), who will be the only one able to finalize the creation of the instance;
It can therefore make good sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)
-
The plugin: Podcasts.
14 July 2010, by
The podcasting problem is, once again, one that highlights the standardization of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, heavily geared towards iTunes, whose spec is here; and the "Media RSS Module" format, which is more "open" and is backed in particular by Yahoo and the Miro software;
File types supported in the feeds
Apple's format only allows the following types in its feeds: .mp3 audio/mpeg, .m4a audio/x-m4a, .mp4 (...)
On other sites (2425)
-
Store converted video in a buffer to transfer to S3
24 May 2012, by Chris
I have a system set up to upload an image, take that temporarily uploaded file, convert it to a resized JPEG, read the buffer of that conversion, and send the buffer to Amazon S3 for storage as an image. It works wonderfully because no permanent file is stored on my server; everything is on S3.
Now I am attempting to add the same functionality for video. The process runs through, but the resulting files stored on Amazon's S3 servers are 1.5 KB apiece instead of multi-megabyte videos.
My code is as follows:
public function transfer($method, $file, $bucketName, $filename, $contentType){
    switch($method){
        case 'file':
            // Upload directly from a file on disk
            if($this->s3->putObject($this->s3->inputFile($file, false), $bucketName, $filename, S3::ACL_PUBLIC_READ, array(), $contentType))
                return true;
            else
                return false;
            break;
        case 'string':
            // Upload raw data passed in as a string
            if ($this->s3->putObject($file, $bucketName, $filename, S3::ACL_PUBLIC_READ, array(), $contentType))
                return true;
            else
                return false;
            break;
        case 'resource':
            // Upload from an open stream resource
            if($this->s3->putObject($this->s3->inputResource(fopen($file, 'rb'), filesize($file)), $bucketName, $filename, S3::ACL_PUBLIC_READ, array(), $contentType))
                return true;
            else
                return false;
            break;
    }
    return false;
}

/**
 * AmazonS3Handler - convert()
 */
public function convert($file, $type)
{
    $ffmpeg = "/usr/bin/ffmpeg";

    // Command templates for each target type
    $cmd['webm'] = $ffmpeg. " -i ". $file ." -vcodec libvpx -acodec libvorbis -ac 2 -f webm -g 30 2>&1";
    $cmd['ogv'] = $ffmpeg. " -i ". $file ." -vcodec libtheora -acodec libvorbis -ac 2 2>&1";
    $cmd['mp4'] = $ffmpeg. " -i ". $file ." -vcodec libx264 -acodec libfaac -ac 2 2>&1";
    $cmd['jpg'] = $ffmpeg. " -i ". $file ." -vframes 30";

    // Run the command and capture whatever it writes to stdout
    ob_start();
    passthru($cmd[$type]);
    $fileContents = ob_get_contents();
    ob_end_clean();

    return $fileContents;
}

From my understanding, passthru should return raw output which could be picked up by the output buffer.
Am I doing something wrong? Is there a better way to convert a video on my server but keep the data on S3?
Thanks!
EDIT: I've boiled it down to the execution of the command. FFmpeg wasn't being located, so I changed the path to "/usr/bin/ffmpeg"; no more error 127. Now when I run exec() (not passthru) I get error 1.
EDIT 2: Here is the output of running the script:
Array
(
[0] => ffmpeg version 0.7.3-4:0.7.3-0ubuntu0.11.10.1, Copyright (c) 2000-2011 the Libav developers
[1] => built on Jan 4 2012 16:08:51 with gcc 4.6.1
[2] => configuration: --extra-version='4:0.7.3-0ubuntu0.11.10.1' --arch=amd64 --prefix=/usr --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --enable-libvpx --enable-runtime-cpudetect --enable-vaapi --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
[3] => libavutil 51. 7. 0 / 51. 7. 0
[4] => libavcodec 53. 6. 0 / 53. 6. 0
[5] => libavformat 53. 3. 0 / 53. 3. 0
[6] => libavdevice 53. 0. 0 / 53. 0. 0
[7] => libavfilter 2. 4. 0 / 2. 4. 0
[8] => libswscale 2. 0. 0 / 2. 0. 0
[9] => libpostproc 52. 0. 0 / 52. 0. 0
[10] =>
[11] => Seems stream 1 codec frame rate differs from container frame rate: 2000.00 (2000/1) -> 30.30 (1000/33)
[12] => Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'iv.m4v':
[13] => Metadata:
[14] => major_brand : M4V
[15] => minor_version : 1
[16] => compatible_brands: M4V M4A mp42isom
[17] => creation_time : 2012-01-23 04:00:18
[18] => encoder : Mac OS X v10.7.2 (CMA 889, CM 705.42, x86_64)
[19] => Duration: 00:00:11.70, start: 0.000000, bitrate: 345 kb/s
[20] => Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16, 125 kb/s
[21] => Metadata:
[22] => creation_time : 2012-01-23 04:00:18
[23] => Stream #0.1(und): Video: h264 (Main), yuv420p, 640x360 [PAR 1:1 DAR 16:9], 206 kb/s, 30.30 fps, 30.30 tbr, 1k tbn, 2k tbc
[24] => Metadata:
[25] => creation_time : 2012-01-23 04:00:18
[26] => iv.webm: Permission denied
)

It looks like the permissions aren't set right; however, the file has chmod 777 permissions. What other permissions need to be changed?
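As a point of comparison, here is a minimal sketch of the buffer-capture idea, under the assumption that ffmpeg is asked to write the muxed stream to stdout (pipe:1) while its log stays on stderr. Note that the commands above name no output file and (for webm/ogv/mp4) fold stderr into stdout with 2>&1, so the capture there only ever sees log text; the method name and encoding options below are illustrative, not part of the original code.

// Hypothetical variant of convert(): have ffmpeg mux WebM to stdout so the
// output buffer receives video data rather than ffmpeg's log text.
public function convertToBuffer($file)
{
    $ffmpeg = "/usr/bin/ffmpeg";

    // pipe:1 sends the output to stdout; 2>/dev/null keeps the log (stderr) out of the capture
    $cmd = $ffmpeg . " -i " . escapeshellarg($file)
         . " -vcodec libvpx -acodec libvorbis -ac 2 -f webm pipe:1 2>/dev/null";

    ob_start();
    passthru($cmd);          // passthru passes the command's binary stdout straight through
    $data = ob_get_clean();  // grab the buffered WebM data and discard the buffer

    return $data;            // could then be sent with transfer('string', ...) as in the existing flow
}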
-
ffmpeg stream chrome kiosk mode ubuntu 16.04 server
21 December 2016, by Raul
I have a weird out-of-sync issue while using ffmpeg to stream a Chrome browser to YouTube Live from an Ubuntu 16.04 server.
Issue: the video streamed to YouTube has audio/video out of sync, sometimes by as much as 3 s.
Current flow:
1) Start pulseaudio; we use something like this to start it:
pulseaudio --start -vvv --disallow-exit --log-target=syslog --high-priority --exit-idle-time=-1 --daemonize
2) Start Xvfb:
Xvfb :0 -ac -screen 0 1920x1080x24
3) Start Chrome on Linux in kiosk mode:
google-chrome --kiosk --disable-gpu --incognito --no-first-run --disable-java --disable-plugins --disable-translate --disk-cache-size=$((1024 * 1024)) --disk-cache-dir=/tmp/chrome/ --user-data-dir=/tmp/chrome/ --force-device-scale-factor=1 --window-size=1920,1080 --window-position=0,0 LOCATION_URL
4) Start ffmpeg:
ffmpeg -y \
-thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
-thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
-c:v libx264 -pix_fmt yuv420p -c:v libx264 -g 48 -crf 24 -filter:v fps=24 -preset ultrafast -tune zerolatency \
-c:a aac -strict -2 -channel_layout stereo -ab 96k -ac 2 -flags +global_header \
-f flv YOUTUBE_LIVE_STREAMING_RTMP
Note: this is running on an Amazon EC2 instance, meaning there is no sound card, so ALSA and pulseaudio create a dummy audio card. However, the latency does not come from there. Logs:
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Adjust latency mode enabled, configuring sink latency to half of overall latency.
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Requested latency=23.22 ms, Received latency=23.22 ms
Nov 25 06:14:22 ip-172-31-29-8 pulseaudio[26602]: [pulseaudio] protocol-native.c: Final latency 69.66 ms = 23.22 ms + 2*11.61 ms + 23.22 ms
At this point, here's what we observed:
-
If we start ffmpeg immediately after issuing the command to start Chrome, we see DTS errors from ffmpeg. Audio is out of sync with the video and runs 3-5 seconds AHEAD. We also noticed that the offset stays the same for the full duration of the stream.
-
If we start ffmpeg after around 10 seconds, audio and video are almost in sync. We then manually added -itsoffset -0.125 to the ffmpeg command and everything is perfect (see the sketch just after this list for where that option would sit).
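For reference, here is a sketch of where that option would go, assuming the offset is meant to shift the audio (pulse) input; -itsoffset is an input option, so it has to appear before the -i of the input it applies to:

# Same command as in step 4 above, with -itsoffset applied to the ALSA/pulse input (an assumption)
ffmpeg -y \
  -thread_queue_size 8192 -rtbufsize 250M -f x11grab -video_size 1920x1080 -framerate 24 -i :0 \
  -itsoffset -0.125 -thread_queue_size 8192 -channel_layout stereo -f alsa -i pulse \
  -c:v libx264 -pix_fmt yuv420p -g 48 -crf 24 -filter:v fps=24 -preset ultrafast -tune zerolatency \
  -c:a aac -strict -2 -channel_layout stereo -ab 96k -ac 2 -flags +global_header \
  -f flv YOUTUBE_LIVE_STREAMING_RTMP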
Questions:
- Why would ffmpeg have so much lag if it is started right after Chrome?
- Is starting ffmpeg after 10 s (or X seconds) the expected behavior? That is, is it because the system needs to wait for the audio/video signals to be "ready" or something?
- Is there a way to reliably calculate or know when Chrome is fully ready before starting ffmpeg? We found it sometimes takes 5 s, sometimes 10; it depends on the URL we load. (A possible check is sketched after this list.)
- Besides the DTS errors that ffmpeg throws, is there any other way to know whether audio/video is out of sync? Sometimes we have a delay of 0.5 to 1 s, but ffmpeg does not report anything, and a restart is required to "re-balance" the audio/video inputs and get them back in sync.
- Can pulseaudio be the problem in this scenario?
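On the question of detecting when Chrome is ready, one possible check (a sketch, not something from the original post) is to poll PulseAudio until Chrome has registered a sink input before launching ffmpeg; the 30-attempt limit and the grep pattern are assumptions:

# Wait (up to ~30 s) for Chrome to show up as a PulseAudio playback client, then start ffmpeg
for i in $(seq 1 30); do
    if pactl list sink-inputs | grep -qi 'chrom'; then
        break    # Chrome has opened an audio stream
    fi
    sleep 1
done
# ...now launch the ffmpeg command from step 4 above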
Thank you
UPDATE Dec 20
We were able to do some tricks to force Chrome to start the audio on page load, which forces it to connect to pulseaudio. Doing that, plus adding a 3 s delay before starting ffmpeg, there is no longer any delay when ffmpeg starts.
However, our app is a WebRTC app, and we have a STRANGER thing happening: if we start the page with no webcam/audio, then once the webcam/audio is enabled, ffmpeg (while showing no errors) has a delay of about 2 s. If we keep talking, within at most 30 s that delay is GONE.
So the new questions are:
- Besides the DTS errors that ffmpeg throws, is there any other way to know whether audio/video is out of sync? Sometimes we have a delay of 0.5 to 1 s, but ffmpeg does not report anything.
- What could cause the initial audio/video sync issue and the subsequent catching up?