
Media (91)
-
Richard Stallman et le logiciel libre
19 October 2011
Updated: May 2013
Language: French
Type: Text
-
Stereo master soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Audio
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (72)
-
Requesting the creation of a channel
12 March 2010
Depending on how the platform is configured, a user may have two different ways to request the creation of a channel. The first is at the moment of registration; the second, after registration, by filling in a request form.
Both methods ask for the same information and work in much the same way: the prospective user fills in a series of form fields that first of all give the administrators information about (...)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page.
-
(De)activating features (plugins)
18 February 2011
To manage the addition and removal of extra features (plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To get there, simply go to the configuration area and then to the "Gestion des plugins" page.
By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)
On other sites (7237)
-
Transcoding fMP4 to HLS while writing on iOS using FFmpeg
29 April 2017, by bclymer
TL;DR
I want to convert fMP4 fragments to TS segments (for HLS) as the fragments are being written using FFmpeg on an iOS device.
Why?
I’m trying to achieve live uploading on iOS while maintaining a seamless, HD copy locally.
What I’ve tried
-
Rolling AVAssetWriters, where each writes for 8 seconds, then concatenating the MP4s together via FFmpeg.
What went wrong - There are blips in the audio and video at times. I've identified 3 reasons for this.
1) Priming frames for audio, written by the AAC encoder, create gaps.
2) Since video frames are 33.33 ms long and audio frames about 0.022 s (22 ms) long, it's possible for them not to line up at the end of a file.
3) Frame-accurate encoding is present on Mac OS, but not available for iOS (details here).
-
FFmpeg muxing a large video-only MP4 file with raw audio into TS segments. The work was based on the Kickflip SDK.
What went wrong - Every once in a while an audio-only file would get uploaded, with no video whatsoever. We were never able to reproduce it in-house, but it was pretty upsetting to our users when they didn't record what they thought they did. There were also issues with accurate seeking on the final segments, almost as if the TS segments were incorrectly timestamped.
What I’m thinking now
Apple was pushing fMP4 at WWDC this year (2016) and I hadn't looked into it much at all before that. Since an fMP4 file can be read and played while it's being written, I thought it would be possible for FFmpeg to transcode the file as it's being written as well, as long as we hold off sending the bytes to FFmpeg until each fragment within the file is finished.
However, I'm not familiar enough with the FFmpeg C API; I only used it briefly in attempt #2.
What I need from you
- Is this a feasible solution? Is anybody familiar enough with fMP4 to know if I can actually accomplish this?
- How will I know that AVFoundation has finished writing a fragment within the file so that I can pipe it into FFmpeg?
- How can I take data from a file on disk, a chunk at a time, pass it into FFmpeg and have it spit out TS segments? (See the sketch after this list.)
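
On that last point, here is a minimal sketch of the remuxing step with FFmpeg's libavformat. Since the fMP4 fragments already carry encoded H.264/AAC, producing TS segments is a remux rather than a re-encode. This is a sketch, not a drop-in implementation: the function name is made up, error handling is abbreviated, and feeding data a chunk at a time would additionally need a custom AVIOContext read callback, which is omitted here.

#include <libavformat/avformat.h>

/* Remux one finished fMP4 input (in_path) into an MPEG-TS file (out_path). */
static int remux_fmp4_to_ts(const char *in_path, const char *out_path)
{
    AVFormatContext *in = NULL, *out = NULL;
    AVPacket *pkt = NULL;
    int ret;

    if ((ret = avformat_open_input(&in, in_path, NULL, NULL)) < 0)
        return ret;
    if ((ret = avformat_find_stream_info(in, NULL)) < 0)
        goto end;

    avformat_alloc_output_context2(&out, NULL, "mpegts", out_path);
    for (unsigned i = 0; i < in->nb_streams; i++) {
        AVStream *os = avformat_new_stream(out, NULL);
        avcodec_parameters_copy(os->codecpar, in->streams[i]->codecpar);
        os->codecpar->codec_tag = 0; /* let the TS muxer pick its own tags */
    }
    if ((ret = avio_open(&out->pb, out_path, AVIO_FLAG_WRITE)) < 0)
        goto end;
    if ((ret = avformat_write_header(out, NULL)) < 0)
        goto end;

    pkt = av_packet_alloc();
    while (av_read_frame(in, pkt) >= 0) {
        /* rescale timestamps from the MP4 time base to the TS time base */
        av_packet_rescale_ts(pkt,
                             in->streams[pkt->stream_index]->time_base,
                             out->streams[pkt->stream_index]->time_base);
        if ((ret = av_interleaved_write_frame(out, pkt)) < 0)
            break;
    }
    av_write_trailer(out);

end:
    av_packet_free(&pkt);
    avformat_close_input(&in);
    if (out && out->pb)
        avio_closep(&out->pb);
    avformat_free_context(out);
    return ret;
}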
-
h264 via WebRTC latency issue
18 September 2024, by Lucas
I am trying to send a video stream encoded with H.264 (hardware-accelerated with the NVIDIA encoder) via WebRTC, for low-latency display in a browser.


More precisely, I have a thread that encodes an OpenGL framebuffer at a fixed frame rate; the resulting AVPacket data (I encode using FFmpeg's C API) is then forwarded via WebRTC to the client (using aiortc).


The problem is that I observe significant delays that seem to depend on the frame rate I use.
For example, running it locally, I get around 160 ms of delay when running at 30 fps, and around 30 ms when encoding at 90 fps.


The delay here is the measured time to encode + transmit + decode, and I have the strong impression that the issue happens when presenting the video frame, as if the browser were not immediately presenting it... (encoding is fast, I would expect the transmission to also be rather fast on a local setup, and decoding seems to be fine as well, as reported by the RTP stats in the browser).


I tried to play with RTP timestamps, but that did not change anything; the only variable that seems to impact the latency is the encoding thread's frequency.


Any idea what could be creating this latency? Am I missing a parameter?


Also, here are the codec options I use (from what I experimented, they do not influence the latency that much):


profile = high
preset = llhq # low latency, high quality
tune = zerolatency
zerolatency = 1
g = 2 * FRAME_PER_SECOND # key frame every 2s
strict-gop = 1
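
For reference, here is a minimal sketch of how options like these might be set through FFmpeg's C API on an h264_nvenc encoder. This is an assumption-laden illustration, not the poster's code: FRAME_PER_SECOND stands in for their frame-rate constant, libavcodec spells the last option strict_gop, and "tune = zerolatency" is x264 syntax (recent nvenc builds expect tunes like "ll" or "ull"), so it is left out.

#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

#define FRAME_PER_SECOND 30 /* assumption: the encoding thread's fixed rate */

/* Open an h264_nvenc encoder with roughly the option set listed above. */
static AVCodecContext *open_nvenc(int width, int height)
{
    const AVCodec *codec = avcodec_find_encoder_by_name("h264_nvenc");
    AVCodecContext *ctx;

    if (!codec)
        return NULL;
    ctx = avcodec_alloc_context3(codec);
    ctx->width     = width;
    ctx->height    = height;
    ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
    ctx->time_base = (AVRational){1, FRAME_PER_SECOND};
    ctx->framerate = (AVRational){FRAME_PER_SECOND, 1};
    ctx->gop_size  = 2 * FRAME_PER_SECOND; /* g: key frame every 2 s */

    /* nvenc private options; exact names and values vary across versions */
    av_opt_set(ctx->priv_data, "profile", "high", 0);
    av_opt_set(ctx->priv_data, "preset", "llhq", 0); /* low latency, high quality */
    av_opt_set_int(ctx->priv_data, "zerolatency", 1, 0);
    av_opt_set_int(ctx->priv_data, "strict_gop", 1, 0);

    if (avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx;
}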



UPDATE


I have the impression that the jitter buffer on Chrome's side is preventing the RTP packets from being decoded immediately; is that possible?


UPDATE 2


- Using the RTP playout-delay header extension slightly reduced the latency.
- Setting playoutDelayHint in the browser also seemed to help a bit.






UPDATE 3


After further investigation, I came to the conclusion that it is not possible to get lower latency through standard WebRTC video streams, as there is little to no control over the video buffering, which I believe to be responsible for the observed latency.


On a side note, I tried to check how Google Stadia does it, as they seem to use WebRTC as well, but they use some in-house frameworks... (plus Chrome is the only supported browser).


-
libswscale/aarch64: add another hscale specialization
13 August 2022, by Jonathan Swinney
libswscale/aarch64: add another hscale specialization
This specialization handles the case where filterSize is 4 mod 8, e.g. 12, 20, etc. AArch64 was previously using the C function for this case. This implementation speeds up that case significantly.
hscale_8_to_15__fs_12_dstW_512_c: 6234.1
hscale_8_to_15__fs_12_dstW_512_neon: 1505.6
Signed-off-by: Jonathan Swinney <jswinney@amazon.com>
Signed-off-by: Martin Storsjö <martin@martin.st>
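
For context, a simplified sketch of the scalar loop that such a NEON specialization replaces is shown below. It is modeled on libswscale's C reference path for 8-bit input and a 15-bit intermediate, with names simplified; it is not the actual FFmpeg source.

#include <stdint.h>

/* Horizontal scale: for each output pixel, apply a filterSize-tap filter
 * over 8-bit source pixels, then shift and clip to a 15-bit intermediate.
 * The specialization above targets filterSize % 8 == 4 (12, 20, ...). */
static void hscale_8_to_15_ref(int16_t *dst, int dstW, const uint8_t *src,
                               const int16_t *filter, const int32_t *filterPos,
                               int filterSize)
{
    for (int i = 0; i < dstW; i++) {
        int val = 0;
        for (int j = 0; j < filterSize; j++)
            val += src[filterPos[i] + j] * filter[filterSize * i + j];
        val >>= 7;                          /* scale down */
        dst[i] = val > 32767 ? 32767 : val; /* clip to 15 bits */
    }
}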