
Media (91)
-
Geodiversity
9 September 2011
Updated: August 2018
Language: French
Type: Text
-
USGS Real-time Earthquakes
8 September 2011
Updated: September 2011
Language: French
Type: Text
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
-
The conservation of net art in the museum. The strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
-
Podcasting Legal guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
-
Creativecommons informational flyer
16 May 2011
Updated: July 2013
Language: English
Type: Text
Other articles (31)
-
Update from version 0.1 to 0.2
24 June 2013 — An explanation of the notable changes made when moving from MediaSPIP version 0.1 to version 0.3. What are the new features?
Software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favor of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Customizing by adding your logo, banner or background image
5 September 2013 — Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013 — Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form — For a document of the news type, the default fields are: publication date (customize the publication date) (...)
On other sites (8667)
-
Stack AVFrame side by side (libav/ffmpeg)
22 February 2018, by dronemastersaga — So I am trying to combine two 1920x1080 H264 livestreams side by side into a single 3840x1080 livestream.
For this, I can decode the streams into AVFrames in libav/FFmpeg, which I would like to combine into a bigger frame. The input AVFrames: two 1920x1080 frames in NV12 format (planar YUV 4:2:0, 12bpp, one plane for Y and one plane for the UV components, which are interleaved: first byte U, the following byte V).
The way I have figured out is a colorspace conversion (YUV to BGR) with sws_scale, wrapping the results in OpenCV Mats, using hconcat in OpenCV to stack them side by side, and then converting back (BGR to YUV).
Below is the method currently being used:
//Prior code is too long: Basically it decodes 2 streams to AVFrames frame1 and frame2 in a loop
sws_scale(swsContext, (const uint8_t *const *) frame1->data, frame1->linesize, 0, 1080, (uint8_t *const *) frameBGR1->data, frameBGR1->linesize);
sws_scale(swsContext, (const uint8_t *const *) frame2->data, frame2->linesize, 0, 1080, (uint8_t *const *) frameBGR2->data, frameBGR2->linesize);
Mat matFrame1(1080, 1920, CV_8UC3, frameBGR1->data[0], (size_t) frameBGR1->linesize[0]);
Mat matFrame2(1080, 1920, CV_8UC3, frameBGR2->data[0], (size_t) frameBGR2->linesize[0]);
Mat fullFrame;
hconcat(matFrame1, matFrame2, fullFrame);
const int stride[] = { static_cast<int>(fullFrame.step[0]) };
sws_scale(modifyContext, (const uint8_t * const *)&fullFrame.data, stride, 0, fullFrame.rows, newFrame->data, newFrame->linesize);
//From here, newFrame is sent to the encoder
The resulting image is satisfactory, but it does lose quality in the colorspace conversion. More importantly, this method is too slow to use (I'm at 15 fps and I need 30). Is there a way to stack AVFrames directly, without colorspace conversion? Or is there a better way to do this? I have searched a lot and couldn't find a solution. Please advise.
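For what it's worth, the same side-by-side composition can be done entirely in YUV with FFmpeg's hstack filter, which avoids the BGR round trip; the filter is also available programmatically through libavfilter. A minimal command-line sketch, with hypothetical input names in1.h264 and in2.h264:
# hstack joins the two 1920x1080 frames into one 3840x1080 frame without leaving YUV
ffmpeg -i in1.h264 -i in2.h264 -filter_complex "[0:v][1:v]hstack=inputs=2[v]" -map "[v]" -c:v libx264 -preset veryfast -f mpegts out.ts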
-
Bash script : automate ffmpeg encoding for mpeg-dash
13 February 2018, by Massimo Vantaggio — I'm writing a bash script that encodes and concatenates videos for MPEG-DASH live streaming use.
Basically it reads an input video folder, encodes all the videos into three resolutions, and then concatenates them to create three adaptation sets.
This script:
checks for fps conformance,
forces/scales the resolution if the input is not 1920x1080p,
inserts the channel logo PNG,
cuts the end of every input video so that it ends on a closed GOP, to ensure no video ends up with audio and video tracks of different lengths.
ISSUE:
Actually, I'm not sure that the concatenation process respects the closed-GOP alignment that exists after encoding.
I also try to cut the end of the concatenation result so that it too finishes on a closed GOP without a fractional duration, but I'm unable to remove all decimals from the total duration:
total duration in seconds: 826.795000
total duration corrected in seconds: 826
But the real duration measured by ffprobe is
824.044000
I tried to check keyframe alignment with MP4Box as they teach, without any result:
MP4Box -info TRACK_ID source1.mp4 2>&1 | grep GOP
This is the first time I have worked with "video scripts", and I probably don't know what input to give for TRACK_ID.
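As a sketch of another way to inspect keyframe alignment (the file name below is hypothetical), ffprobe can list the timestamps of keyframe packets, which should fall on the same 2-second grid in every encoded rendition:
# print pts_time,flags for video packets and keep only keyframes (flag K)
ffprobe -v error -select_streams v:0 -show_entries packet=pts_time,flags -of csv=p=0 buffer/clip-1080.mp4 | grep K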
BASH SCRIPT:
#!/bin/bash
#CANCAT 0.2
cd input
times=()
fps=()
for f in *.mp4; do
_t=$(ffprobe -i "$f" -show_entries format=duration -v quiet -of csv="p=0")
times+=("$_t")
_f=$(ffmpeg -i "$f" 2>&1 | sed -n "s/.*, \\(.*\\) fp.*/\\1/p")
fps+=("$_f")
done
#SUM ALL DURATIONS
TOTALDURATION=$( echo "${times[@]}" | sed 's/ /+/g' | bc )
#DELETE DECIMAL
DURROUND=$(echo "$TOTALDURATION" | cut -d'.' -f1)
#GET REST OF DIVISION BY 2 AS GOP
TOTDELTA="$((DURROUND%2))"
#SUBTRACT DELTA FROM TOTAL DURATION
TOTDUR="$(($DURROUND-$TOTDELTA))"
#GET NUMBER OF ELEMENTS IN FPS ARRAY
tLen=${#fps[@]}
#CHECK FPS EQUALITY
for tLen in "${fps[@]:1}"; do
if [[ $tLen != ${fps[0]} ]]; then
printf "WARNING: VIDEO’S FRAME-RATE ARE NOT EQUALS, THE PROCESS CAN’T START."
printf "%s\\0" "${fps[@]}" |
sort -zu |
xargs -0 printf " %s"
printf "\\n"
exit 1
fi
done
for f in *.mp4; do
#GET DURATION OF EACH VIDEO
DUR="$(ffprobe -i "$f" -show_entries format=duration -v quiet -of csv="p=0")"
DUR=$(echo "$DUR" | cut -d'.' -f1) # DELETE DECIMAL
#GET FPS OF EACH VIDEO
FPS="$(ffmpeg -i "$f" 2>&1 | sed -n "s/.*, \(.*\) fp.*/\1/p")"
#ROUND FPS OF EACH VIDEO
FPSC=$( echo "($FPS+0.5)/1" | bc )
#REMOVE EXTENSION FROM VIDEO FILE NAME
NAME=$(echo "$f" | cut -d'.' -f1)
#GET GOP
GOP="$((FPSC*2))"
DELTADUR="$((DUR%2))"
DUR="$(($DUR-$DELTADUR))"
#ENCODE 1080p
ffmpeg -y -i "$f" -i ../logo/logo.png -c:a aac -b:a 384k -ar 48000 -ac 2 -async 1 -c:v libx264 -x264opts keyint=$GOP:min-keyint=$GOP:no-scenecut -bf 0 -r $FPSC -b:v 4800k -maxrate 4800k -bufsize 3000k -profile:v main -crf 22 -t $DUR -filter_complex "[0:v][1:v]overlay=main_w-overlay_w-10:10,scale=1920:1080,setsar=1" ../buffer/${NAME}-1080.mp4
#ENCODE 720p
ffmpeg -y -i ../buffer/${NAME}-1080.mp4 -c:a aac -b:a 256k -ar 48000 -ac 2 -async 1 -c:v libx264 -x264opts keyint=$GOP:min-keyint=$GOP:no-scenecut -bf 0 -s 1280x720 -r $FPSC -b:v 2400k -maxrate 2400k -bufsize 1500k -profile:v main -crf 22 -t $DUR ../buffer/${NAME}-720.mp4
#ENCODE 360p
ffmpeg -y -i ../buffer/${NAME}-720.mp4 -c:a aac -b:a 128k -ar 48000 -ac 2 -async 1 -c:v libx264 -x264opts keyint=$GOP:min-keyint=$GOP:no-scenecut -bf 0 -s 640x360 -r $FPSC -b:v 800k -maxrate 800k -bufsize 500k -profile:v main -crf 22 -t $DUR ../buffer/${NAME}-360.mp4
done
#enter in buffer
cd ..
cd buffer
#CONCAT 1080 SET
# with a bash for loop
for f in ./*1080.mp4; do echo "file '$f'" >> 1080list.txt; done
ffmpeg -f concat -safe 0 -i 1080list.txt -t $TOTDUR -c copy ../output/1080set.mp4
#CONCAT 720 SET
# with a bash for loop
for f in ./*720.mp4; do echo "file '$f'" >> 720list.txt; done
ffmpeg -f concat -safe 0 -i 720list.txt -t $TOTDUR -c copy ../output/720set.mp4
#CONCAT 360 SET
# with a bash for loop
for f in ./*360.mp4; do echo "file '$f'" >> 360list.txt; done
ffmpeg -f concat -safe 0 -i 360list.txt -t $TOTDUR -c copy ../output/360set.mp4
#CLEAN BUFFER
rm *.mp4
rm *.txt
echo "CONCAT COMPLETED:"
echo "frame-rate: $fps"
echo "total duration in seconds: $TOTALDURATION"
echo "total duration corrected in seconds: $TOTDUR"The full file with relative folders :
Is there someone who can help me understand why I cannot eliminate the decimals from the total duration during concat?
And how can I check overall keyframe alignment?
Also, any improvement I'm not aware of is welcome! Thanks a lot!
Massimo
-
Why is ffmpeg trying to produce both h.264 as well as h.265?
7 March 2018, by hydra3333 — I think I only ask for h.265 output, but the log below seems to indicate it is trying to produce two output video streams.
"C:\SOFTWARE\ffmpeg\0-homebuilt-x64\built_for_generic_opencl\x64_8bit\ffmpeg.exe" -hide_banner -v verbose -threads 0 -i "G:\HDTV\0nvencc\test-mp4-03\ABC HD interlaced.aac.mp4" -t 15 -threads 0 -an -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp -filter_complex "[0:v]yadif=0:0:0" -pixel_format yuv420p -pix_fmt yuv420p -strict -1 -f yuv4mpegpipe - 2> .\zzz1.h265.txt
| "C:\SOFTWARE\ffmpeg\0-homebuilt-x64\built_for_generic_opencl\x64_8bit\ffmpeg.exe" -strict -1 -hide_banner -v verbose -threads 0 -i - -strict -1 -c:v:0 libx265 -crf 28 output.mp4 -an -y .\zzz.h265.mp4 2> .\zzz2.h265.txt
-----------------------------
<snip to="to" make="make" code="code" block="block" smaller="smaller">
Stream mapping:
Stream #0:0 (h264) -> yadif
yadif -> Stream #0:0 (wrapped_avframe)
Press [q] to stop, [?] for help
[h264 @ 000001b78ca87c00] Reinit context to 1920x1088, pix_fmt: yuv420p
[graph 0 input from stream 0:0 @ 000001b78ca264c0] w:1920 h:1080 pixfmt:yuv420p tb:1/90000 fr:25/1 sar:1/1 sws_param:flags=2
Output file #0 (pipe:):
Output stream #0:0 (video): 2 frames encoded; 2 packets muxed (1072 bytes);
Total: 2 packets (1072 bytes) muxed
Conversion failed!
-----------------------------
Routing option strict to both codec and muxer layer
Last message repeated 1 times
Input #0, yuv4mpegpipe, from 'pipe:':
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: rawvideo, 1 reference frame (I420 / 0x30323449), yuv420p(progressive, left), 1920x1080, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> hevc (libx265))
Stream #0:0 -> #1:0 (rawvideo (native) -> h264 (libx264))
[graph 0 input from stream 0:0 @ 0000027f5ebefa00] w:1920 h:1080 pixfmt:yuv420p tb:1/25 fr:25/1 sar:1/1 sws_param:flags=2
x265 [info]: HEVC encoder version 2.7613d9f443769
x265 [info]: build info [Windows][GCC 7.3.0][64 bit] 8bit
x265 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
x265 [info]: Main profile, Level-4 (Main tier)
x265 [info]: Thread pool created using 8 threads
x265 [info]: Slices : 1
x265 [info]: frame threads / pool features : 3 / wpp(17 rows)
-----------------------------
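Judging by the two stream mappings in the log, the second invocation appears to be treated as having two output files: output.mp4 (which gets libx265) and .\zzz.h265.mp4 (which falls back to the default libx264). In case that is the cause, a minimal sketch of the second command with the stray output.mp4 removed, assuming a single HEVC file is the intent (full ffmpeg.exe path shortened):
ffmpeg.exe -strict -1 -hide_banner -v verbose -threads 0 -i - -c:v libx265 -crf 28 -an -y .\zzz.h265.mp4 2> .\zzz2.h265.txt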