
Other articles (37)
-
Customising by adding your logo, banner, or background image
5 September 2013 — Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013 — Present the changes to your MediaSPIP, or news about your projects, on your MediaSPIP via the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customise the form used to create a news item.
News item creation form: for a document of type news item, the default fields are: publication date (customise the publication date) (...)
-
Publishing on MediaSPIP
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is version 0.2 or higher. If necessary, contact your MediaSPIP administrator to find out.
On other sites (5240)
-
Any advice on streaming AV1 with gstreamer to mediamtx and webrtcbin?
8 February 2024, by Israel Robotnick — I have GStreamer 1.22.7 with the RS (Rust) plugins for AV1 support.
I'm trying to stream AV1 over RTP to mediamtx using gstreamer, but the bigger goal is for my rtspsrc->webrtcbin pipeline to work with AV1 the way it works with h264/vp8/vp9.


I've created a few AV1 files with ffmpeg using svtav1 and rav1e encoders:


ffmpeg -i h264.mp4 -an -c:v libsvtav1 -preset 5 -crf 30 -g 60 -svtav1-params tune=0:fast-decode=1 -pix_fmt yuv420p test1.mp4

ffmpeg -i h264.mp4 -an -c:v librav1e -preset 5 -crf 30 -g 60 -rav1e-params speed=5:low_latency=true -pix_fmt yuv420p test2.mp4



ffmpeg does not currently support streaming AV1 over RTP/RTSP, so I'm using gstreamer to do it:


gst-launch-1.0 filesrc location=test1.mp4 ! qtdemux ! av1parse ! rtspclientsink location=rtsp://127.0.0.1:8554/test1



From what I've read, the latest versions of MediaMTX, Chrome, and VLC support AV1 streaming over WebRTC/RTSP, but there are no examples whatsoever of how to do it.


GStreamer prerolls, plays, and reports recording when publishing, and everything seems to be fine; the mediamtx logs show the same.


When I try to connect a client to the RTSP path via VLC, ffplay, or a gstreamer rtspsrc->webrtcbin pipeline, I don't get any image (webrtc-internals shows packets arriving fine, but VLC and ffmpeg can't connect).
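For reference, the plain-playback test I'm running looks roughly like the sketch below. It assumes the rtpav1depay and dav1ddec elements from gst-plugins-rs are present; the decoder element name may differ in your build:

gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test1 ! rtpav1depay ! av1parse ! dav1ddec ! videoconvert ! autovideosink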


Any ideas what could be wrong? Does anyone have experience encoding and streaming AV1 with gstreamer's rtspclientsink?
If you have any tips on redirecting it to webrtcbin (what I do is rtspsrc...parsebin ! queue ! rtpav1pay ! webrtcbin, which seems to connect to Chrome and create the AV1 SDP, but there is no image) I would appreciate them as well :)
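For clarity, the description I'm building around webrtcbin is roughly the following. This is not a complete program (webrtcbin still needs signaling code around it), and the caps I set on the payloader output are my guess at what Chrome negotiates for AV1:

rtspsrc location=rtsp://127.0.0.1:8554/test1 ! parsebin ! queue ! rtpav1pay ! application/x-rtp,media=video,encoding-name=AV1,payload=96,clock-rate=90000 ! webrtcbin name=sendrecv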


-
Recommendations for real-time pixel-level analysis of television (TV) video
6 December 2011, by Randall Cook — [Note: This is a rewrite of an earlier question that was considered inappropriate and closed.]
I need to do some pixel-level analysis of television (TV) video. The exact nature of this analysis is not pertinent, but it basically involves looking at every pixel of every frame of TV video, starting from an MPEG-2 transport stream. The host platform will be server-class, multiprocessor 64-bit Linux machines.
I need a library that can handle the decoding of the transport stream and present me with the image data in real time. OpenCV and ffmpeg are two libraries that I am considering for this work. OpenCV is appealing because I have heard it has easy-to-use APIs and rich image analysis support, but I have no experience using it. I have used ffmpeg in the past for extracting video frame data from files for analysis, but it lacks image analysis support (though Intel's IPP can supplement it).
In addition to general recommendations for approaches to this problem (excluding the actual image analysis), I have some more specific questions that would help me get started:
- Are ffmpeg or OpenCV commonly used in industry as a foundation for real-time video analysis, or is there something else I should be looking at?
- Can OpenCV decode video frames in real time, and still leave enough CPU left over to do nontrivial image analysis, also in real time?
- Is it sufficient to use ffmpeg for MPEG-2 transport stream decoding, or is it preferable to use an MPEG-2 decoding library directly (and if so, which one)? (See the sketch after this list.)
- Are there particular pixel formats for the output frames that ffmpeg or OpenCV is particularly efficient at producing (like RGB, YUV, or YUV422, etc.)?
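To make the third question concrete, this is the kind of minimal decode step I have in mind: a sketch that uses ffmpeg purely as a transport-stream decoder and pipes raw frames to a separate analysis process. input.ts, bgr24, and ./analyzer are placeholder names of mine, and the real frame size would come from probing the stream:

# hypothetical sketch: decode the MPEG-2 TS and write raw frames to stdout;
# the analyzer then reads fixed-size frames (width*height*3 bytes for bgr24)
ffmpeg -i input.ts -f rawvideo -pix_fmt bgr24 pipe:1 | ./analyzer --width 1920 --height 1080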
-
ffmpeg: combine audio mix code into complex concat script
15 May 2020, by sisterbrother — I currently have two different ffmpeg scripts that I want to combine. I don't have much ffmpeg experience and the code is mostly copied from Google results, so please be patient with me.



The first script concatenates three videos:



ffmpeg -y -i "$vid1" -i "$fp" -i "$vid1" -filter_complex \
"[0:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v0]; \
 [1:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v1]; \
 [2:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v2]; \
 [0:a]aformat=sample_rates=48000:channel_layouts=stereo[a0]; \
 [1:a]aformat=sample_rates=48000:channel_layouts=stereo[a1]; \
 [2:a]aformat=sample_rates=48000:channel_layouts=stereo[a2]; \
 [v0][a0][v1][a1][v2][a2]concat=n=3:v=1:a=1[v][a]; \
 [v]drawtext=text='example..':y=h-line_h-$h3:x=w/30*mod(t\,20):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
 [v]drawtext=text='example..':y=h-line_h-$hcentral:x=w/20*mod(t\,100):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
 [v]drawtext=text='example..':y=h-line_h-23:x=w/30*mod(t\,20):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]" \
 -map "[v]" -map "[a]" -c:v libx264 -crf 22 -preset veryfast -c:a aac -movflags +faststart "$fp_dest"




The second script overlays a background mp3, looped endlessly, onto the video created above. It's important to note that this script mixes the mp3 with the video's existing audio rather than replacing it. In the future I will lower the volume of the mp3 files so they work as background music.



ffmpeg -y -i "$fp_dest" -filter_complex "amovie=$audio:loop=0,asetpts=N/SR/TB[aud];[0:a][aud]amix[a]" -map 0:v -map '[a]' -c:v copy -c:a aac -b:a 256k -shortest ./test.mp4





So currently I have two steps that I want to combine into one. Can you please help me merge the second script into the first one without changing any of the logic?
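For what it's worth, the closest I can get on my own is the following untested sketch: it moves the amovie branch from the second script into the first filter_complex and mixes it with the concat audio. amix=inputs=2:duration=first is my assumption for stopping the looped mp3 when the concatenated audio ends (standing in for -shortest):

ffmpeg -y -i "$vid1" -i "$fp" -i "$vid1" -filter_complex \
"[0:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v0]; \
 [1:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v1]; \
 [2:v]scale=$cResolution:force_original_aspect_ratio=decrease,pad=$cResolution:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v2]; \
 [0:a]aformat=sample_rates=48000:channel_layouts=stereo[a0]; \
 [1:a]aformat=sample_rates=48000:channel_layouts=stereo[a1]; \
 [2:a]aformat=sample_rates=48000:channel_layouts=stereo[a2]; \
 [v0][a0][v1][a1][v2][a2]concat=n=3:v=1:a=1[v][a]; \
 [v]drawtext=text='example..':y=h-line_h-$h3:x=w/30*mod(t\,20):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
 [v]drawtext=text='example..':y=h-line_h-$hcentral:x=w/20*mod(t\,100):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
 [v]drawtext=text='example..':y=h-line_h-23:x=w/30*mod(t\,20):enable='gt(mod(t,$dr2),$Introdr_rounded)'[v]; \
 amovie=$audio:loop=0,asetpts=N/SR/TB[aud]; \
 [a][aud]amix=inputs=2:duration=first[aout]" \
 -map "[v]" -map "[aout]" -c:v libx264 -crf 22 -preset veryfast -c:a aac -b:a 256k -movflags +faststart "$fp_dest"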