
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (79)
-
Organising by category
17 May 2013, by
In MediaSPIP, a section has two names: "catégorie" (category) and "rubrique" (section).
The various documents stored in MediaSPIP can be filed under different categories. You can create a category by clicking on "publier une catégorie" ("publish a category") in the publish menu at the top right (after logging in). A category can itself be filed inside another category, which means you can build a whole tree of categories.
When a document is next published, the newly created category will be offered (...) -
Retrieving information from the master site when installing an instance
26 November 2010, by
Purpose
On the main site, a shared-hosting (mutualisation) instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, matching an id_auteur in the spip_auteurs table), who will be the only one allowed to finalise the creation of the instance;
It can therefore make perfect sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...) -
Publishing on MédiaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSPIP to find out.
On other sites (3063)
-
FFmpeg: 4K RGB->YUV realtime conversion
6 March 2021, by andersd
I'm trying to use FFmpeg to create an HEVC realtime stream from a Decklink input. The goal is a high-quality 10-bit HDR stream.
The Decklink SDI input is fed 10-bit RGB, which ffmpeg handles well via the decklink option -raw_format rgb10 and recognizes as 'gbrp10le'.


I have an Nvidia Pascal-based card, which supports 10-bit yuv444 (as 'yuv444p16le'), and when using '-c:v hevc_nvenc' the auto_scaler kicks in and converts to 'yuv444p16le', which I guess is the same conversion as passing '-pix_fmt yuv444p16le'.


This works very well at 1920x1080, but at 4096x2160 ffmpeg can't keep up with realtime 24 or 25 fps, and I get input buffer overruns.
The culprit seems to be the RGB->YUV conversion in ffmpeg's swscale, because:


- When piping the Decklink 4K RGB input with '-c:v copy' straight to /dev/null, there are no problems with buffer overruns;
- When feeding the Decklink YUV instead, with '-raw_format yuv422p10' (no YUV444 input seems to be available for Decklink in ffmpeg), I get no overrun and everything works well in 4K, even if I set '-pix_fmt yuv444p16le'.

Any ideas how I could accomplish 4K HEVC in NVENC with the 10-bit RGB signal from the Decklink? Is there a way to make NVENC accept and use the RGB data without first converting to YUV? Or is there maybe a way to convert gbrp10le->yuv444p16le with the cuda or scale_npp filters? I have compiled ffmpeg with npp and cuda, but I cannot figure out whether I can get them to work with RGB. Whenever I try '-vf "hwupload_cuda"', the auto_scaler kicks in and tries to convert to YUV on the CPU, which again creates overruns.
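
On the CUDA question specifically: a filtergraph along the lines below would keep the conversion on the GPU, but only if the hwupload_cuda and scale_cuda filters in your build list gbrp10le among their supported formats (check ffmpeg -h filter=scale_cuda; if gbrp10le is missing, the upload itself forces a CPU-side conversion first, which would match the auto_scaler behaviour described above). A sketch, not verified against a Decklink input:

ffmpeg -f decklink -raw_format rgb10 -i "Blackmagic Card 1" \
    -vf "hwupload_cuda,scale_cuda=format=yuv444p16le" \
    -c:v hevc_nvenc -preset medium -profile:v main10 -b:v 20M -f nut - > /dev/null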


Another thing that I guess could help is a way to make the swscale CPU filter (or another suitable filter, if one exists) use multiple threads. Right now it seems to use only one thread at a time, maxing out a single core at 99% on my Ryzen 3950X (3.5 GHz, 32 threads).
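
On the threading question: the classic swscale path is single-threaded, which fits the 99%-of-one-core observation. Later FFmpeg releases added slice threading to libswscale, and the global -filter_threads option controls how many threads a filtergraph may use; whether a given build benefits is worth checking against its changelog. A hedged sketch:

ffmpeg -filter_threads 8 -f decklink -raw_format rgb10 -i "Blackmagic Card 1" \
    -pix_fmt yuv444p16le -c:v hevc_nvenc -preset medium -profile:v main10 -b:v 20M -f nut - > /dev/null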


Example ffmpeg output:


$ ffmpeg -loglevel verbose -f decklink -raw_format rgb10 -i "Blackmagic Card 1" -c:v hevc_nvenc -preset medium -profile:v main10 -cbr 1 -b:v 20M -f nut - > /dev/null
--
Stream #0:1: Video: r210, 1 reference frame, gbrp10le(progressive), 4096x2160, 6635520 kb/s, 25 tbr, 1000k tbn, 1000k tbc
--
[graph 0 input from stream 0:1 @ 0x4166180] w:4096 h:2160 pixfmt:gbrp10le tb:1/1000000 fr:25000/1000 sar:0/1
[auto_scaler_0 @ 0x4168480] w:iw h:ih flags:'bicubic' interl:0
[format @ 0x4166080] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_null_0' and the filter 'format'
[auto_scaler_0 @ 0x4168480] w:4096 h:2160 fmt:gbrp10le sar:0/1 -> w:4096 h:2160 fmt:yuv444p16le sar:0/1 flags:0x4
[hevc_nvenc @ 0x4139640] Loaded Nvenc version 11.0
--
Stream #0:0: Video: hevc (Rext), 1 reference frame (HEVC / 0x43564548), yuv444p16le(tv, progressive), 4096x2160 (0x0), q=2-31, 2000 kb/s, 25 fps, 51200 tbn
--
[decklink @ 0x40f0900] Decklink input buffer overrun!:02.52 bitrate= 30471.3kbits/s speed=0.627x



-
Android - How can I pass the camera stream to ffmpeg, using the Camera2 library?
29 March 2021, by Juan José Cetraro
I am trying to create an app that shows the device's camera on the screen, and also streams the camera over SRT. To do this, I am using the Camera2 library and ffmpeg (in particular https://github.com/tanersener/mobile-ffmpeg, which is an ffmpeg wrapper for Android).


My plan is to get the camera stream using Camera2 (via the onImageAvailable method of the ImageReader.OnImageAvailableListener interface), and send this stream to udp://localhost:1234. Then I can use ffmpeg to receive that stream over UDP and send it out over SRT.


I've already solved the part that sends the stream over SRT using ffmpeg, and it works fine. In fact, if I set "android_camera" as the input of my ffmpeg command, my app works OK. The problem with this approach is that it blocks access to the camera, so I can't show the camera on the screen with another library.


I also found code that uses Camera2 to stream the camera over UDP, and it works, but the problem with that code is that it converts each frame to a bitmap before sending it, which makes it perform poorly.


So, what is the best way to pass the data over UDP to ffmpeg, so that ffmpeg can process it and send it out over SRT?
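
For context on the ffmpeg side: if the app ends up sending raw frames, ffmpeg has to be told the exact frame geometry through the rawvideo demuxer. A sketch, with a placeholder port and SRT target, assuming the build includes libsrt:

ffmpeg -f rawvideo -pixel_format yuv420p -video_size 1280x720 -framerate 30 \
    -i udp://127.0.0.1:1234 -c:v libx264 -preset veryfast -tune zerolatency \
    -f mpegts "srt://example-host:9000?mode=caller"

One caveat: a raw 1280x720 yuv420p frame is roughly 1.3 MB, far above the ~64 KB ceiling of a single UDP datagram, so raw frames over udp:// will be fragmented or dropped; a pipe or a local socket between the app and ffmpeg tends to be more robust.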


Camera2 lets me configure the format in which my listener receives the frames:


ImageReader.newInstance(1280, 720, ImageFormat.JPEG, /*maxImages*/2);



In this example I am setting JPEG as the ImageFormat, but here are all the available formats I could use:


UNKNOWN, RGB_565, YV12, Y8, Y16, NV16, NV21, YUY2, JPEG, DEPTH_JPEG, YUV_420_888, YUV_422_888, YUV_444_888, FLEX_RGB_888, FLEX_RGBA_8888, RAW_SENSOR, RAW_PRIVATE, RAW10, RAW12, DEPTH16, DEPTH_POINT_CLOUD, RAW_DEPTH, PRIVATE, HEIC
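
If raw frames are going to ffmpeg, YUV_420_888 is the natural pick from that list, since JPEG would add an encode/decode round trip. A sketch of the changed reader setup (backgroundHandler is assumed to exist in the surrounding code):

// Sketch: request raw YUV frames instead of JPEG
ImageReader reader = ImageReader.newInstance(1280, 720, ImageFormat.YUV_420_888, /*maxImages*/ 2);
reader.setOnImageAvailableListener(this, backgroundHandler);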


This is the method where I will receive each frame; what I need to know is what kind of transformation I have to apply before sending the frame over UDP to ffmpeg:


@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage(); // newest frame, or null if none is ready
    if (image == null) return;
    // TODO: copy the plane buffers and hand them to the UDP sender
    image.close(); // always close, or the ImageReader stalls
}
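
As for the transformation itself: with YUV_420_888, the three planes have to be packed into one contiguous buffer before they can be sent as rawvideo. A sketch of that packing (imageToI420 is my own name, and it assumes a chroma pixelStride of 1; many devices interleave chroma with pixelStride 2, which needs a per-pixel copy instead):

import android.media.Image;
import java.nio.ByteBuffer;

// Sketch: pack a YUV_420_888 Image into a contiguous I420 byte[]
// that ffmpeg can read with -pixel_format yuv420p.
private byte[] imageToI420(Image image) {
    int w = image.getWidth();
    int h = image.getHeight();
    byte[] out = new byte[w * h * 3 / 2];
    int offset = 0;
    Image.Plane[] planes = image.getPlanes(); // plane order: Y, U, V
    for (int i = 0; i < 3; i++) {
        ByteBuffer buf = planes[i].getBuffer();
        int planeW = (i == 0) ? w : w / 2;   // chroma is subsampled 2x
        int planeH = (i == 0) ? h : h / 2;
        int rowStride = planes[i].getRowStride();
        for (int row = 0; row < planeH; row++) {
            buf.position(row * rowStride);   // skip any row padding
            buf.get(out, offset, planeW);
            offset += planeW;
        }
    }
    return out;
}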



Thanks in advance for reading the question :)


-
Insert still frames into H.264 video stream
7 July 2021, by Bassinator
I'm building an application that receives video packets encoded as H.264 from Microsoft Teams; I get one packet for each frame of video. Specifications of the packet contents are given here. For every packet I receive, I write the byte contents of the data[] buffer to a file. The resulting file is a playable H.264 encoded video.


I'm trying to handle the scenario of syncing the audio and video streams from a Teams meeting, inserting a still PNG frame as "filler" when nobody has their camera on.


I used the following FFmpeg command to generate n seconds of H.264 video from the filler frame:


ffmpeg -loop 1 -i video_filler_frame.png -framerate 30 -c:v libx264 -t 2 -vf scale=1920:1080 C:\Code\temp\out.mp4



This generates an MP4 file (H.264 encoded). As a test, I tried reading the contents of the generated file as a byte array and appending them to the video file.


However, this doesn't appear to work. I'm guessing that's because some kind of header or other metadata rules out the simple approach of just appending the bytes of the next frame.
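
That guess points in the right direction: an .mp4 file is a container (moov/mdat boxes and metadata), not a raw H.264 bitstream, so its bytes cannot simply be appended to an elementary stream. Generating the filler as a raw Annex B elementary stream instead should make byte-wise concatenation workable, provided the parameter sets (resolution, profile, pixel format) match the Teams stream. A sketch, with filler.h264 as my own output name:

ffmpeg -loop 1 -framerate 30 -i video_filler_frame.png -t 2 \
    -vf scale=1920:1080,format=yuv420p -c:v libx264 -f h264 filler.h264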


My question is: how can I achieve what I am trying to do? I'd like to splice in n filler frames as I write the individual packet contents to the file. For example, consider the following sequence:


- Write packets of video to the file
- My code determines that filler frames are needed at some point in this process
  - Insert the needed number of filler frames into the file
- Continue writing packets of video as they come in