
Other articles (53)
-
MediaSPIP in private mode (Intranet)
17 September 2013. Starting with version 0.3, a MediaSPIP channel can be made private, blocked to anyone who is not logged in, thanks to the "Intranet/extranet" plugin.
When enabled, the Intranet/extranet plugin blocks access to the channel for any unauthenticated visitor, preventing them from reaching the content by systematically redirecting them to the login form.
This system can be particularly useful for certain use cases, such as: a workshop with children whose content must not (...) -
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out. -
Support for all types of media
10 April 2011. Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)
On other sites (4161)
-
Why is recording webcam with FFmpeg much faster with an input format?
29 June 2021, by Alter
Question is the title.


It's a basic question, but one I can't find clearly answered on Google or in the documentation, which says:




If you specify the input format and device then ffmpeg can grab video and audio directly.




This isn't explicit enough for a beginner like me and raises other questions. What does ffmpeg do when the input format isn't specified, and why is it so slow? If ffmpeg can figure out the encoding, doesn't it just do so once?
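As a side note, FFmpeg's video4linux2 input device can itself list the formats it is able to negotiate, via its -list_formats option. Assuming the same /dev/video0 device as below, something like the following should print them:

ffmpeg -f v4l2 -list_formats all -i /dev/video0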


Examples


For completeness... I'm using a Raspberry Pi 4 with a Picamera2 to record 1080p.


With the input format specified, I get the full 30 fps:
ffmpeg -input_format h264 -i /dev/video0 -codec h264_v4l2m2m test.h264


Without the input format, I get about 5 fps:
ffmpeg -i /dev/video0 -codec h264_v4l2m2m test.h264


Note: it will run without the hardware acceleration option -codec h264_v4l2m2m but doesn't reach the full 30 fps.
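For comparison, since the camera can deliver H.264 directly, one variant worth noting is to pin the capture parameters and copy the compressed stream instead of re-encoding it. This is a sketch assuming the device accepts 1920x1080 at 30 fps; -f v4l2, -framerate and -video_size are standard options of FFmpeg's v4l2 input device:

ffmpeg -f v4l2 -input_format h264 -framerate 30 -video_size 1920x1080 -i /dev/video0 -codec copy test.h264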

Formats


In response to @llogan's comment, the output of v4l2-ctl --list-formats-ext is:

ioctl: VIDIOC_ENUM_FMT
 Type: Video Capture

 [0]: 'YU12' (Planar YUV 4:2:0)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [1]: 'YUYV' (YUYV 4:2:2)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [2]: 'RGB3' (24-bit RGB 8-8-8)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [3]: 'JPEG' (JFIF JPEG, compressed)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [4]: 'H264' (H.264, compressed)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [5]: 'MJPG' (Motion-JPEG, compressed)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [6]: 'YVYU' (YVYU 4:2:2)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [7]: 'VYUY' (VYUY 4:2:2)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [8]: 'UYVY' (UYVY 4:2:2)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [9]: 'NV12' (Y/CbCr 4:2:0)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [10]: 'BGR3' (24-bit BGR 8-8-8)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [11]: 'YV12' (Planar YVU 4:2:0)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [12]: 'NV21' (Y/CrCb 4:2:0)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
 [13]: 'RX24' (32-bit XBGR 8-8-8-8)
 Size: Stepwise 32x32 - 2592x1944 with step 2/2
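As a related aside, v4l2-ctl can also show or change the format currently configured on the device (it defaults to /dev/video0). These are standard v4l2-ctl options, shown here only for reference:

v4l2-ctl --get-fmt-video
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=H264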



-
How to stream an image for a specific number of seconds with ffmpeg?
14 July 2021, by Alex Rypun
I need to stream an image to an RTMP destination for 20 seconds.
I use AWS MediaLive + MediaPackage for streaming.
I'm trying different framerates (-r) with the command:

ffmpeg -loop 1 -r 30 -t 20 -i ./stream_stub.jpg -vcodec libx264 -f flv rtmp://mystream



but the actual stream duration is between 8 and 14 seconds (depending on the framerate).


If I create a video file instead, the duration is 20 seconds (as expected) regardless of the framerate:


ffmpeg -loop 1 -r 30 -t 20 -i ./stream_stub.jpg -vcodec libx264 -f flv out.mp4



But for live streaming, I can't achieve the expected behavior.


What am I doing wrong? And what framerate should I use for a single still image (as I understand it, 1 fps should be fine)?
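One detail worth keeping in mind with live outputs (a hedged aside, not a confirmed fix for this setup): by default ffmpeg encodes as fast as it can, so a 20-second clip may be pushed to the RTMP endpoint in much less than 20 wall-clock seconds. The -re input option makes ffmpeg read the input at its native frame rate, so the 600 frames produced by -r 30 -t 20 would be fed out over roughly 20 real seconds:

ffmpeg -re -loop 1 -r 30 -t 20 -i ./stream_stub.jpg -vcodec libx264 -f flv rtmp://mystream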


-
AVFormatContext: interrupt callback proper usage?
17 July 2021, by Daniel
AVFormatContext's interrupt_callback field is a



Custom interrupt callbacks for the I/O layer.




Its type is AVIOInterruptCB, and its comment section explains:



Callback for checking whether to abort blocking functions.


AVERROR_EXIT is returned in this case by the interrupted function. During blocking operations, callback is called with opaque as parameter. If the callback returns 1, the blocking operation will be aborted.


No members can be added to this struct without a major bump, if new elements have been added after this struct in AVFormatContext or AVIOContext.




I have 2 questions:


- What does the last section mean? Especially "without a major bump"?
- If I use this along with an RTSP source, when I close the input with avformat_close_input, the "TEARDOWN" message is sent out, but it never reaches the RTSP server.






For 2: here is a quick demo sketch:


#include <stdbool.h>
#include <libavformat/avformat.h>

static int pkts = 0;
static bool early_exit = false;

/* Called by libavformat during blocking operations; returning 1 aborts them. */
static int InterruptCallback(void* opaque) {
    return early_exit ? 1 : 0;
}

int main(void) {
    AVFormatContext* ctx = avformat_alloc_context();
    ctx->interrupt_callback.callback = InterruptCallback;

    /* "rtsp://..." stands in for the real RTSP URL (omitted here). */
    if (avformat_open_input(&ctx, "rtsp://...", NULL, NULL) < 0)
        return 1;
    avformat_find_stream_info(ctx, NULL);

    AVPacket pkt;
    pkts = 0;
    while (!early_exit) {
        if (av_read_frame(ctx, &pkt) < 0)
            break;
        av_packet_unref(&pkt);
        if (pkts++ > 100) early_exit = true; /* callback returns 1 from now on */
    }

    /* TEARDOWN is sent here, but (with the callback set) never reaches the server. */
    avformat_close_input(&ctx);
    return 0;
}



If I don't use the interrupt callback at all, TEARDOWN is sent out and it also reaches the RTSP server, so the connection is actually torn down. Otherwise the connection is not torn down and I have to wait until the TCP socket times out.


What is the proper way of using this interrupt callback?