
Other articles (19)
-
Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible, and development is based on expanding the (...)
-
Specific configuration for PHP5
4 February 2011
PHP5 is required; you can install it by following this specific tutorial.
It is recommended to disable safe_mode at first; however, if it is correctly configured and the necessary binaries are accessible, MediaSPIP should work correctly with safe_mode enabled.
Specific modules
Certain specific PHP modules must be installed, via your distribution's package manager or manually: php5-mysql for connectivity with the (...)
-
Automatic backup of SPIP channels
1 April 2010
As part of setting up an open platform, it is important for hosts to have fairly regular backups available to guard against any potential problem.
This task relies on two SPIP plugins: Saveauto, which performs a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which creates a zip archive of the site's important data (documents, elements (...)
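As a rough illustration of what the backup plugins described above automate (a sketch under assumed database names and paths, not taken from the article), the equivalent manual steps would look something like this:

# Hypothetical equivalent of the two plugins: a MySQL dump of the SPIP database
# plus a zip archive of the site's uploaded documents (assumed to live in IMG/).
mysqldump -u spip -p spip_db > backup-$(date +%F).sql
zip -r documents-$(date +%F).zip IMG/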
On other sites (3994)
-
How to enable LHLS in FFMPEG 4.1 ?
27 December 2020, by mehdi.r
I am trying to create a low-latency CMAF video stream using FFmpeg.
To do so, I would like to enable the lhls option in FFmpeg in order to have the #EXT-X-PREFETCH tag written in the HLS manifest.


From the FFmpeg documentation:



https://www.ffmpeg.org/ffmpeg-all.html





Enable Low-latency HLS (LHLS). Adds #EXT-X-PREFETCH tag with current segment's URI. Apple doesn't have an official spec for LHLS. Meanwhile hls.js player folks are trying to standardize an open LHLS spec. The draft spec is available at https://github.com/video-dev/hlsjs-rfcs/blob/lhls-spec/proposals/0001-lhls.md. This option will also try to comply with the above open spec, till Apple's spec officially supports it. Applicable only when streaming and hls_playlist options are enabled. This is an experimental feature.
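In other words, with -lhls enabled the media playlist should also contain a prefetch hint roughly of the form below (the segment name here is illustrative only):

#EXT-X-PREFETCH:chunk-stream0-00013.mp4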





I am using the following command with FFmpeg 4.1:



ffmpeg -re -i ~/Documents/videos/BigBuckBunny.mp4 \
 -map 0 -map 0 -map 0 -c:a aac -c:v libx264 -tune zerolatency \
 -b:v:0 2000k -s:v:0 1280x720 -profile:v:0 high \
 -b:v:1 1500k -s:v:1 640x340 -profile:v:1 main \
 -b:v:2 500k -s:v:2 320x170 -profile:v:2 baseline \
 -bf 1 \
 -keyint_min 24 -g 24 -sc_threshold 0 -b_strategy 0 -ar:a:1 22050 -use_timeline 1 -use_template 1 \
 -window_size 5 -adaptation_sets "id=0,streams=v id=1,streams=a" \
 -hls_playlist 1 -seg_duration 1 -streaming 1 -strict experimental -lhls 1 -remove_at_exit 1 \
 -f dash manifest.mpd





The kind of HLS manifest I obtained for a specific resolution:



#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:1
#EXT-X-MEDIA-SEQUENCE:8
#EXT-X-MAP:URI="init-stream0.mp4"
#EXTINF:0.998458,
#EXT-X-PROGRAM-DATE-TIME:2019-06-21T18:13:56.966+0900
chunk-stream0-00008.mp4
#EXTINF:0.998458,
#EXT-X-PROGRAM-DATE-TIME:2019-06-21T18:13:57.964+0900
chunk-stream0-00009.mp4
#EXTINF:0.998458,
#EXT-X-PROGRAM-DATE-TIME:2019-06-21T18:13:58.963+0900
chunk-stream0-00010.mp4
#EXTINF:0.998458,
#EXT-X-PROGRAM-DATE-TIME:2019-06-21T18:13:59.961+0900
chunk-stream0-00011.mp4
#EXTINF:1.021678,
#EXT-X-PROGRAM-DATE-TIME:2019-06-21T18:14:00.960+0900
chunk-stream0-00012.mp4
...





As you can see, the #EXT-X-PREFETCH tag is missing.


Any help would be highly appreciated.



Edit



I also compiled FFmpeg from its master branch by doing the following:



nasm



sudo apt-get install nasm mingw-w64




Codecs



sudo apt-get install libx265-dev libnuma-dev libx264-dev libvpx-dev libfdk-aac-dev libmp3lame-dev libopus-dev




FFmpeg



mkdir lhls
cd lhls 
git init 
git clone https://github.com/FFmpeg/FFmpeg.git
cd FFmpeg 
git checkout master




AOM (inside FFmpeg dir)



git -C aom pull 2> /dev/null || git clone --depth 1 https://aomedia.googlesource.com/aom && \
mkdir -p aom_build && \
cd aom_build && \
PATH="$HOME/bin:$PATH" cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX="$HOME/ffmpeg_build" -DENABLE_SHARED=off -DENABLE_NASM=on ../aom && \
PATH="$HOME/bin:$PATH" make && \
make install
cd ..




Compiling



PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure \
 --prefix="$HOME/ffmpeg_build" \
 --pkg-config-flags="--static" \
 --extra-cflags="-I$HOME/ffmpeg_build/include" \
 --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
 --extra-libs="-lpthread -lm" \
 --bindir="$HOME/bin" \
 --enable-gpl \
 --enable-libaom \
 --enable-libass \
 --enable-libfdk-aac \
 --enable-libfreetype \
 --enable-libmp3lame \
 --enable-libopus \
 --enable-libvorbis \
 --enable-libvpx \
 --enable-libx264 \
 --enable-libx265 \
 --enable-nonfree && \
PATH="$HOME/bin:$PATH" make 




Unfortunately, the #EXT-X-PREFETCH tag is still missing from the HLS manifest.


I also tried the nightly builds from https://ffmpeg.zeranoe.com/builds/, with the same result.



Any help would be highly appreciated.



EDIT 2: resolved



Thanks to @aergistal and @Gyan, the #EXT-X-PREFETCH tag is now present in my HLS manifest.


Here is the FFmpeg command I am using:



./ffmpeg -re -i ~/videos/BigBuckBunny.mp4 -loglevel debug \
 -map 0 -map 0 -map 0 -c:a aac -c:v libx264 -tune zerolatency \
 -b:v:0 2000k -s:v:0 1280x720 -profile:v:0 high -b:v:1 1500k -s:v:1 640x340 -profile:v:1 main -b:v:2 500k -s:v:2 320x170 -profile:v:2 baseline -bf 1 \
 -keyint_min 24 -g 24 -sc_threshold 0 -b_strategy 0 -ar:a:1 22050 -use_timeline 1 -use_template 1 -window_size 5 \
 -adaptation_sets "id=0,streams=v id=1,streams=a" -hls_playlist 1 -seg_duration 3 -streaming 1 \
 -strict experimental -lhls 1 -remove_at_exit 0 -master_m3u8_publish_rate 3 \
 -f dash -method PUT -http_persistent 1 https://example.com/manifest.mpd




Apparently the MIME types are not passed to the server, and FFmpeg seems to ignore the -headers option.

-
Trying to use ffmpeg to create a slideshow from ISO-8601-named pictures. Getting output with no playable streams
19 June 2019, by Robert Ellegate
I'm trying to create a slideshow of images that are irregular in dimension/orientation but all named with the same ISO-8601 date format.
I've normalized the filenames so they are all YYYYMMDD.jpg. I have tried using the glob pattern type for ffmpeg and various methods of feeding in the files, including piping the concatenation of the files into ffmpeg.
Here are the images I'm trying to use:
$ ls *.jpg | xargs -n1 file
20190411.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=upper-left, width=0], baseline, precision 8, 10128x3984, components 3
20190417.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10176x3952, components 3
20190424.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=upper-left, width=0], baseline, precision 8, 12128x3840, components 3
20190429.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=upper-left, width=0], baseline, precision 8, 11104x3888, components 3
20190430.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10992x3920, components 3
20190501.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10528x3936, components 3
20190502.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10992x3792, components 3
20190508.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 11008x3808, components 3
20190515.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10416x3760, components 3
20190516.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10928x3760, components 3
20190517.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=lower-right, width=0], baseline, precision 8, 10720x3840, components 3
20190522.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6552x1688, components 3
20190523.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6572x1700, components 3
20190524.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6468x1659, components 3
20190528.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 5424x1644, components 3
20190529.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=7, model=Pixel 2 XL, height=0, manufacturer=Google, orientation=[*0*], datetime=2019:05:29 16:38:01, width=0]
20190531.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6584x1693, components 3
20190603.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6536x1690, components 3
20190604.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 5748x1618, components 3
20190606.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6196x1690, components 3
20190607.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6112x1674, components 3
20190610.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6440x1670, components 3
20190611.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6312x1694, components 3
20190612.jpg: JPEG image data, Exif standard: [TIFF image data, big-endian, direntries=4, height=0, orientation=[*0*], width=0], baseline, precision 8, 6176x1689, components 3
And these are the various ffmpeg commands I've tried:
cat *.jpg | ffmpeg -framerate 1/5 -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4
cat *.jpg | ffmpeg -f image2pipe -i - output.mkv
ffmpeg -framerate 1/5 -pattern_type glob -i '*.jpg' out.mp4
ffmpeg -framerate 1/5 -pattern_type glob -i '*.jpg' -c:v libx264 -vf fps=25 -pix_fmt yuv420p out.mp4
I'm trying to create a video that shows each image for 5 seconds in order, but I'm getting an mp4 video file with no playable streams.
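For reference, a sketch of the kind of command such mixed-size JPEGs usually need: every frame is scaled and padded to one common even-sized resolution and forced to yuv420p, since feeding libx264 frames of varying (and very large) dimensions is a typical cause of an output many players refuse. The 1920x1080 target and padding are assumptions, not taken from the question.

# Sketch (not from the question): normalise all images to one even-sized frame,
# show each for 5 seconds, and encode a widely playable H.264/yuv420p stream.
ffmpeg -framerate 1/5 -pattern_type glob -i '*.jpg' \
 -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,format=yuv420p" \
 -c:v libx264 -r 25 out.mp4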
-
tools/python : add script to convert TensorFlow model (.pb) to native model (.model)
13 June 2019, by Guo, Yejun
tools/python: add script to convert TensorFlow model (.pb) to native model (.model)
For example, given the TensorFlow model file espcn.pb, to generate the native model file espcn.model, just run:
python convert.py espcn.pb

In the current implementation, the native model file is generated for a specific dnn network with hard-coded python scripts maintained outside of ffmpeg. For example, the srcnn network used by vf_sr is generated with https://github.com/HighVoltageRocknRoll/sr/blob/master/generate_header_and_model.py#L85

In this patch, the script is designed as a general solution which converts a general TensorFlow model .pb file into a .model file. The script currently contains some tricks to stay compatible with the current implementation, and will be refined step by step.

The script is also added into the ffmpeg source tree. It is expected that there will be many more patches, and the community needs ownership of it.

Another technical direction is to do the conversion in c/c++ code within the ffmpeg source tree. Since the .pb file is organized with protocol buffers, it is not easy to do such work with tiny c/c++ code; see more discussion at http://ffmpeg.org/pipermail/ffmpeg-devel/2019-May/244496.html. So the python script was chosen.

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
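As a usage sketch (the filter invocation below is based on the existing vf_sr options, not on this commit; the file names are illustrative), the converted file would be fed to the native DNN backend roughly like this:

# Convert the TensorFlow graph, then point the sr filter's native backend at it.
python convert.py espcn.pb
ffmpeg -i input.mp4 -vf "sr=dnn_backend=native:model=espcn.model" output.mp4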