
Media (1)
- Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (75)
- Emballe médias: what is it for?
4 February 2011. This plugin is designed to manage sites for publishing documents of all types.
It creates "media", namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image, or text; only one document can be linked to a so-called "media" article; (...)
- Installation in farm mode
4 February 2011. Farm mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge, since SPIP's usual private area is no longer used.
To begin with, you must have installed the same files as the installation (...)
- Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
- implementation costs to be shared between several different projects/individuals
- rapid deployment of multiple unique sites
- creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (4202)
- h264 via WebRTC latency issue
18 September 2024, by Lucas. I am trying to send a video stream encoded with H.264 (hardware-accelerated with the NVIDIA encoder) via WebRTC, for low-latency display in a browser.


More precisely, a thread encodes an OpenGL framebuffer at a fixed frame rate; the resulting AVPacket data (I encode using ffmpeg's C API) is then forwarded via WebRTC to the client (using aiortc).
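As a point of reference, the per-frame encode step described above might look like the following minimal C++ sketch; the names encode_and_forward and forward_to_webrtc are assumptions for illustration, not the asker's code.

extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdint>

// Provided elsewhere by the application (hypothetical): hands the encoded
// bitstream over to the WebRTC layer (aiortc on the Python side).
void forward_to_webrtc(const uint8_t *data, int size);

// One frame in, zero or more packets out.
void encode_and_forward(AVCodecContext *ctx, AVFrame *frame, AVPacket *pkt)
{
    // frame->pts must increase monotonically, in ctx->time_base units.
    if (avcodec_send_frame(ctx, frame) < 0)
        return;
    while (avcodec_receive_packet(ctx, pkt) == 0) {
        forward_to_webrtc(pkt->data, pkt->size);
        av_packet_unref(pkt);
    }
}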


The problem is that I observe significant delays that seem to depend on the frame rate I use.
For example, running it locally, I get around 160 ms of delay at 30 fps, and around 30 ms when encoding at 90 fps.


The delay here is the measured time to encode + transmit + decode, and I have the strong impression that the issue happens when presenting the video frame, as if the browser were not presenting it immediately (encoding is fast, I would expect transmission to also be fast on a local setup, and decoding seems fine as well, as reported by the RTP stats in the browser).


I tried playing with the RTP timestamps, but that did not change anything; the only variable that seems to impact the latency is the encoding thread's frequency.


Any idea what could be creating this latency? Am I missing a parameter?


Also, here are the codec options I use (in my experiments they did not influence the latency much):


profile = high
preset = llhq # low latency, high quality
tune = zerolatency
zerolatency = 1
g = 2 * FRAME_PER_SECOND # key frame every 2s
strict-gop = 1
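For context, a minimal sketch of how options like these could be passed to an h264_nvenc encoder through ffmpeg's C API is given below. It simply mirrors the asker's list (option names and accepted values vary across ffmpeg and NVENC SDK versions, e.g. the available preset and tune values), and the 1920x1080 frame size is an assumption for illustration.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/dict.h>
}

AVCodecContext *open_nvenc(int fps)
{
    const AVCodec *codec = avcodec_find_encoder_by_name("h264_nvenc");
    if (!codec)
        return nullptr;
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->width = 1920;                  // assumed frame size
    ctx->height = 1080;
    ctx->time_base = AVRational{1, fps};
    ctx->framerate = AVRational{fps, 1};
    ctx->pix_fmt = AV_PIX_FMT_YUV420P;
    ctx->gop_size = 2 * fps;            // key frame every 2 s
    ctx->max_b_frames = 0;              // B-frames add reordering latency

    AVDictionary *opts = nullptr;       // private options, as listed above
    av_dict_set(&opts, "profile", "high", 0);
    av_dict_set(&opts, "preset", "llhq", 0);
    av_dict_set(&opts, "tune", "zerolatency", 0);
    av_dict_set(&opts, "zerolatency", "1", 0);
    av_dict_set(&opts, "strict_gop", "1", 0);
    if (avcodec_open2(ctx, codec, &opts) < 0) {
        avcodec_free_context(&ctx);
        ctx = nullptr;
    }
    av_dict_free(&opts);
    return ctx;
}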



UPDATE


I have the impression that the jitter buffer on Chrome's side is preventing the RTP packets from being decoded immediately; is that possible?


UPDATE 2


- Using the RTP playout-delay header extension slightly reduced the latency.
- Setting playoutDelayHint in the browser also seemed to help a bit.






UPDATE 3


After further investigation, I came to the conclusion that it is not possible to get lower latency through standard WebRTC video streams, as there is little to no control over the video buffering, which I believe is responsible for the observed latency.


On a side note, I tried to check how Google Stadia does it, as they seem to use WebRTC as well, but they rely on some in-house frameworks... (plus, Chrome is the only supported browser).


- ffmpeg: How to replace a series of frames with a series of image files?
14 September 2020, by Arnon Weinberg. Given a video file and start and end timestamps, there is a known number of frames between those timestamps in the video file, and I have an equal number of .png files in a directory to replace them with. The .png files are sorted as 001.png ... NNN.png. How would I go about updating the video file with the replacement frames using ffmpeg?


This is a follow-up to using ffmpeg to replace a single frame based on timestamp, but I'm asking about replacing multiple sequential frames based on two timestamps.


Presumably something like:


ffmpeg -i input.mp4 -i %3d.png -filter_complex "something including the timestamps 4.40,5.20" -c:a copy output.mp4



I would also be okay with using frame numbers instead of timestamps if that makes things easier, and it is reasonable to require that the start and end frames be keyframes.
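For what it's worth, one known way to express this with stock ffmpeg is to shift the PNG sequence to the start timestamp and overlay it on the original video only during the target interval. The sketch below assumes a 25 fps source and the 4.40-5.20 s window from the command above; note that ffmpeg cannot splice frames into an existing stream in place, so the video is re-encoded while the audio is copied:

ffmpeg -i input.mp4 -framerate 25 -i %03d.png -filter_complex "[1:v]setpts=PTS-STARTPTS+4.40/TB[patch];[0:v][patch]overlay=eof_action=pass:enable='between(t,4.40,5.20)'[v]" -map "[v]" -map 0:a? -c:a copy output.mp4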


Background:


Many machine-learning algorithms for video processing use ffmpeg to extract specific scenes from videos based on start and end timestamps, dump them into a sequence of .png files, process them in some way (denoising, deblurring, colorization, annotation, inpainting, etc.), and output the results into an equal number of .png files. The output frames are usually assembled into a new video file, but I would like instead to update the source video file so as to preserve the audio, the unedited video, and other video properties (fps, keyframes, etc.).


This approach will not work as-is for some categories of video-processing algorithms. For example, interpolation produces more frames than were originally extracted, and upscaling produces higher-resolution images. As such, I would appreciate an explanation of any solution, so that I can adapt it to such cases (or I will ask separate questions for those).


- How to encode a video from several images generated in a C++ program without writing the separate frame images to disk?
5 May 2021, by ksb496. I am writing a C++ program in which a sequence of N different frames is generated after performing some operations implemented therein. After each frame is completed, I write it to disk as IMG_%d.png, and finally I encode the frames into a video through ffmpeg using the x264 codec.



The summarized pseudocode of the main part of the program is the following:



std::vector<int> B(width*height*3);
for (i=0; i<N; i++)
{
  generateframe(B, i); // void generateframe(std::vector<int> &, int); returns different images for different i values
  sprintf(s, "IMG_%d.png", i+1);
  WriteToDisk(B, s); // void WriteToDisk(std::vector<int>, char[])
}



The problem with this implementation is that the number of desired frames, N, is usually high (N ≈ 100000), as is the resolution of the pictures (1920x1080), resulting in an overload of the disk, with write cycles of dozens of GB after each execution.



To avoid this, I have been trying to find documentation on passing each image stored in the vector B directly to an encoder such as x264 (without having to write the intermediate image files to disk). Although some interesting topics turned up, none of them solved exactly what I want: many of them concern running the encoder with existing image files on disk, while others provide solutions for other programming languages such as Python (here you can find a fully satisfactory solution for that platform).



The pseudocode of what I would like to obtain is something similar to this:



std::vector<int> B(width*height*3);
video_file = open_video("Generated_Video.mp4", ...[encoder options]...);
for (i=0; i<N; i++)
{
  generateframe(B, i);
  add_frame(video_file, B); // Add the frame stored in B to the video.
}
close_video(video_file);



According to what I have read on related topics, the x264 C++ API might be able to do this, but, as stated above, I did not find a satisfactory answer to my specific question. I tried learning and using the ffmpeg source code directly, but both its low ease of use and compilation issues forced me to discard this possibility, as I am merely a non-professional programmer (I take it just as a hobby and unluckily I cannot spend that much time learning something so demanding).



Another possible solution that came to mind is to find a way to call the ffmpeg binary from the C++ code and somehow transfer the image data of each iteration (stored in B) to the encoder, deferring the addition of each frame (that is, not "closing" the video file to write) until the last frame, so that frames can keep being added until reaching the N-th one, at which point the video file is "closed". In other words, call ffmpeg.exe from the C++ program to write the first frame to a video, but make the encoder "wait" for more frames; then call ffmpeg again to add the second frame and make the encoder "wait" again, and so on until the last frame, when the video is finalized. However, I do not know how to proceed, or whether it is actually possible. A sketch of this idea appears below.
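A minimal sketch of this idea, assuming the generateframe() function from the pseudocode above: instead of repeated ffmpeg invocations, the ffmpeg binary is started once with popen() and kept waiting on its standard input, receiving one raw RGB frame per iteration; closing the pipe is what "closes" the video. The 1920x1080 size and 30 fps rate are assumptions for illustration.

#include <cstdio>
#include <vector>

void generateframe(std::vector<int> &, int); // from the pseudocode above

int main()
{
    const int width = 1920, height = 1080, N = 100000;
    std::vector<int> B(width*height*3);
    std::vector<unsigned char> bytes(width*height*3);

    // ffmpeg reads raw frames from stdin ("-i -") until the pipe closes.
    FILE *pipe = popen("ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 1920x1080 "
                       "-r 30 -i - -c:v libx264 -preset slow -crf 20 "
                       "Generated_Video.mp4", "w");
    if (!pipe) return 1;

    for (int i = 0; i < N; i++) {
        generateframe(B, i);
        for (size_t j = 0; j < bytes.size(); j++) // repack ints to bytes
            bytes[j] = static_cast<unsigned char>(B[j]);
        fwrite(bytes.data(), 1, bytes.size(), pipe); // one full frame
    }
    pclose(pipe); // closing stdin makes ffmpeg finalize the file
    return 0;
}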



Edit 1:



As suggested in the replies, I have been reading documentation about named pipes and tried to use them in my code. First of all, it should be remarked that I am working with Cygwin, so my named pipes are created as they would be created under Linux. The modified pseudocode I used (including the corresponding system libraries) is the following:



FILE *fd;
mkfifo("myfifo", 0666);

for (i=0; i<N; i++)
{
  fd = fopen("myfifo", "wb"); // Blocks until the read end is opened.
  generateframe(B, i);
  WriteToPipe(B, fd); // void WriteToPipe(std::vector<int>, FILE *&fd)
  fflush(fd);
  fclose(fd);
}
unlink("myfifo");



WriteToPipe is a slight modification of the previous WriteToDisk function, where I made sure that the write buffer used to send the image data is small enough to fit within the pipe's buffering limits.



Then I compile it and run the following command in the Cygwin terminal:



./myprogram | ffmpeg -i pipe:myfifo -c:v libx264 -preset slow -crf 20 Video.mp4




However, the program remains stuck in the loop at i=0, on the "fopen" line (that is, the first fopen call). If ffmpeg had not been launched, that would be natural, as the server (my program) would be waiting for a client program to connect to the "other side" of the pipe, but that is not the case. It looks like they cannot be connected through the pipe somehow, and I have not been able to find further documentation to overcome this issue. Any suggestions?
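One plausible explanation, offered here as an assumption rather than a confirmed diagnosis: ffmpeg's pipe: protocol takes a file-descriptor number (e.g. pipe:0 for stdin), not a filename, so "-i pipe:myfifo" never opens the FIFO, and the fopen() call keeps blocking because nothing opens the read end. A sketch of reading the FIFO as an ordinary input file instead, assuming raw RGB frames of a known size and rate, would be:

ffmpeg -f rawvideo -pix_fmt rgb24 -s 1920x1080 -r 30 -i myfifo -c:v libx264 -preset slow -crf 20 Video.mp4

With this form, ffmpeg is started alongside ./myprogram (no shell pipe needed), and it is ffmpeg's open of myfifo that unblocks the first fopen.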