
Other articles (100)
- HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows media playback on the major mobile platforms with the above (...)
- Libraries and binaries specific to video and audio processing
31 January 2010
The following software and libraries are used by SPIPmotion in one way or another.
Required binaries:
FFmpeg: the main encoder; it can transcode almost every type of video and audio file into formats playable on the Internet. See this tutorial for its installation;
Oggz-tools: inspection tools for Ogg files;
Mediainfo: retrieves information from most video and audio formats;
Complementary, optional binaries: flvtool2: (...)
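As an illustration of how these binaries are typically invoked from the command line (file names are placeholders, and this is only a sketch, not SPIPmotion's actual calls):

    # Transcode a source file into a web-playable MP4 (codec choice is just an example)
    ffmpeg -i source.avi -c:v libx264 -c:a aac -movflags +faststart web.mp4
    # Inspect an Ogg container (depending on the version, the tool is exposed as "oggz info" or "oggzinfo")
    oggz info clip.ogv
    # Dump technical metadata from most audio and video formats
    mediainfo web.mp4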
- Automatic backup of SPIP channels
1 April 2010
When setting up an open platform, it is important for hosting providers to have reasonably regular backups available in order to cope with any problem that may arise.
To carry out this task we rely on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin); mes_fichiers_2, which creates a zip archive of the site's important data (the documents, the elements (...)
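For reference, the two plugins automate roughly what these manual commands would do (database name, credentials and paths are assumptions, not taken from the article):

    # Regular database backup as a MySQL dump, importable through phpMyAdmin
    mysqldump -u spip_user -p spip_db > backup-$(date +%F).sql
    # Zip archive of the site's important data (uploaded documents live under IMG/ in a standard SPIP install)
    zip -r site-files-$(date +%F).zip IMG/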
On other sites (6609)
- ffmpeg -hls_time option not working correctly when running as a service
4 February 2019, by aexposito
I am running an ffmpeg command as a systemd service to capture a live RTSP stream and generate HLS chunks. The chunks are set to be 30 seconds long with the -hls_time option. When I run the command in a console myself it works fine, but when it runs from the service, the chunks, which are supposed to be 30 seconds long, are only 7 or 8 seconds.
This is the command:

    /usr/bin/ffmpeg -rtsp_flags prefer_tcp -i \
      "rtsp://192.168.1.16:554/user=admin&password=&channel=1&stream=1.sdp" \
      -acodec copy -vcodec copy -hls_time 30 -hls_list_size 10 \
      -hls_flags append_list+delete_segments -f hls -use_localtime 1 \
      -hls_segment_filename "/home/zurikato/video-backup/$FILENAME_FORMAT_hls.ts" \
      /home/zurikato/video-backup/playlist.m3u8

I'm a beginner with ffmpeg and Linux services, so please bear with me if it is a simple matter.
Thanks in advance.
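For context, this is a minimal sketch of a unit that would run the command as a service (the unit name and the FILENAME_FORMAT value are assumptions, not taken from the question). Note that systemd does not start a shell for ExecStart=, so $FILENAME_FORMAT is only expanded if it is defined for the unit, e.g. via Environment=; otherwise it becomes an empty string. Also, in a shell $FILENAME_FORMAT_hls would be read as a variable named FILENAME_FORMAT_hls, so the braces below make the variable boundary explicit.

    # /etc/systemd/system/rtsp-hls.service  (hypothetical unit name)
    [Unit]
    Description=Capture an RTSP stream and write HLS segments
    After=network-online.target

    [Service]
    # Example value; without this line the variable expands to nothing
    Environment=FILENAME_FORMAT=camera1
    ExecStart=/usr/bin/ffmpeg -rtsp_flags prefer_tcp -i "rtsp://192.168.1.16:554/user=admin&password=&channel=1&stream=1.sdp" -acodec copy -vcodec copy -hls_time 30 -hls_list_size 10 -hls_flags append_list+delete_segments -f hls -use_localtime 1 -hls_segment_filename "/home/zurikato/video-backup/${FILENAME_FORMAT}_hls.ts" /home/zurikato/video-backup/playlist.m3u8
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target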
- FFmpeg command works locally but not on Azure Batch Service
10 August 2018, by Elgert
I have a command that generates a video with a background and text on it with FFmpeg, and I would like to render it using Azure Batch Service. Locally my command works:

    ./ffmpeg -f lavfi -i color=c=green:s=854x480:d=7 -vf "[in]drawtext=fontsize=46:fontcolor=White:text=dfdhjf dhjf dhjfh djfh djfh:x=(w-text_w)/2:y=((h-text_h)/2)-48,drawtext=fontsize=46:fontcolor=White:text= djfh djfh djfh djfh djf jdhfdjf hjdfh djfh jd fhdj:x=(w-text_w)/2:y=(h-text_h)/2,drawtext=fontsize=46:fontcolor=White:text=fh:x=(w-text_w)/2:y=((h-text_h)/2)+48[out]" -y StoryA.mp4

while the one generated programmatically with C# and added as a task in Batch Service returns a failure:

    cmd /c %AZ_BATCH_APP_PACKAGE_ffmpeg#3.4%\ffmpeg-3.4-win64-static\bin\ffmpeg -f lavfi -i color=c=green:s=854x480:d=7 -vf "[in]drawtext=fontsize=46:fontcolor=White:text=dfdhjf dhjf dhjfh djfh djfh:x=(w-text_w)/2:y=((h-text_h)/2)-48,drawtext=fontsize=46:fontcolor=White:text= djfh djfh djfh djfh djf jdhfdjf hjdfh djfh jd fhdj:x=(w-text_w)/2:y=(h-text_h)/2,drawtext=fontsize=46:fontcolor=White:text=fh:x=(w-text_w)/2:y=((h-text_h)/2)+48[out]" -y StoryA.mp4

The ffmpeg configuration works, and so does the pool, as I have already tested it with simpler ffmpeg commands that had input and output files. This command doesn't have an input file; maybe that is part of the problem?
Thank you
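Not a confirmed diagnosis, but two differences between a local shell and a Windows Batch node are worth isolating: quoting of the free-text values (they contain spaces) and font lookup (drawtext needs either an explicit fontfile= or a working fontconfig setup). A sketch that sidesteps both, with the file names and the font path as assumptions; the text files and font would have to be shipped to the node alongside the task:

    # Put each caption in its own file so no shell-specific escaping is needed (POSIX shell syntax shown)
    printf '%s' 'dfdhjf dhjf dhjfh djfh djfh' > line1.txt
    printf '%s' ' djfh djfh djfh djfh djf jdhfdjf hjdfh djfh jd fhdj' > line2.txt
    printf '%s' 'fh' > line3.txt
    # textfile= replaces text=, and fontfile= points at a font file shipped with the task (assumed name)
    ffmpeg -f lavfi -i color=c=green:s=854x480:d=7 -vf "[in]drawtext=fontfile=arial.ttf:fontsize=46:fontcolor=White:textfile=line1.txt:x=(w-text_w)/2:y=((h-text_h)/2)-48,drawtext=fontfile=arial.ttf:fontsize=46:fontcolor=White:textfile=line2.txt:x=(w-text_w)/2:y=(h-text_h)/2,drawtext=fontfile=arial.ttf:fontsize=46:fontcolor=White:textfile=line3.txt:x=(w-text_w)/2:y=((h-text_h)/2)+48[out]" -y StoryA.mp4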
- Run 3 Docker images together as a single service
20 June 2018, by kitce
I want to run 3 Docker images as a single service. They are the official nginx, jrottenberg/ffmpeg and a custom image. The custom image returns video files for HTTP requests on port 80, e.g. http:////video.mp4 . I want to make the video files available for HLS as an M3U8 playlist (or a better format?).
The main idea is as follows:
- Encode video.mp4 with ffmpeg, producing video.m3u8 and segment files
- Serve video.m3u8 and the segment files with Nginx
- The final (and only) web service exposed by the container is http:///.m3u8 .
- Encode only when someone requests it (i.e. trigger the encoding when the first request comes in, and stop encoding and delete the segment files when nobody is requesting it any more)
I tested the HLS part with ffmpeg and it works. I am just not sure how to get Nginx and ffmpeg to work together.
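One way to wire the three images together, shown as plain docker commands sharing a named volume (image, volume, network and path names are assumptions; the on-demand trigger from the last bullet is not covered and would need extra logic to start and stop the encoder):

    # Shared storage for the playlist and segments, plus a network so containers can reach each other by name
    docker volume create hls-data
    docker network create video-net
    # The custom image serving video files over HTTP on port 80 (name assumed)
    docker run -d --name source --network video-net my-custom-image
    # jrottenberg/ffmpeg uses ffmpeg as its entrypoint: pull the MP4 over HTTP and write HLS output to the volume
    # (-c copy assumes the source is already H.264/AAC; otherwise re-encode, e.g. -c:v libx264 -c:a aac)
    docker run -d --name encoder --network video-net -v hls-data:/out jrottenberg/ffmpeg \
      -i http://source/video.mp4 -c copy -f hls -hls_time 6 -hls_playlist_type vod \
      -hls_segment_filename /out/video_%03d.ts /out/video.m3u8
    # The official nginx image serves whatever is in /usr/share/nginx/html, i.e. the playlist and segments
    docker run -d --name web --network video-net -v hls-data:/usr/share/nginx/html:ro -p 8080:80 nginx

A docker-compose file would express the same three containers declaratively, which is closer to running them "as a single service".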