
Other articles (10)
-
Selection of projects using MediaSPIP
2 May 2011
The examples below are representative of specific uses of MediaSPIP for specific projects.
MediaSPIP farm @ Infini
The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of this kind. Its members (...)
-
Selection of projects using MediaSPIP
29 April 2011
The examples cited below are representative of specific uses of MediaSPIP for certain projects.
Do you think you have a "remarkable" site built with MediaSPIP? Let us know here.
MediaSPIP farm @ Infini
The Infini association develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. In this respect it plays a unique role (...)
-
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (2270)
-
avformat/matroskaenc: Actually apply timestamp offset for Opus
31 August 2022, by Andreas Rheinhardt
avformat/matroskaenc: Actually apply timestamp offset for Opus
Matroska generally requires timestamps to be nonnegative, but
there is an exception: Data that corresponds to encoder delay
and is not supposed to be output anyway can have a negative
timestamp. This is achieved by using the CodecDelay header
field: The demuxer has to subtract this value from the raw
(nonnegative) timestamps of the corresponding track.
Therefore the muxer has to add this value first to write
this raw timestamp.

Support for writing CodecDelay has been added in FFmpeg commit
d92b1b1babe69268971863649c225e1747358a74 and in Libav commit
a1aa37dd0b96710d4a17718198a3f56aea2040c1. The former simply
wrote the header field and did not apply any timestamp offsets,
leading to desynchronisation (if one uses multiple tracks).
The latter applied it at two places, but not at the one where
it actually matters, namely in mkv_write_block(), leading to
the same desynchronisation as with the former commit. It furthermore
used the wrong stream timebase to convert the delay to the
stream's timebase, as the conversion used the timebase from
before avpriv_set_pts_info().

When the latter was merged in 82e4f39883932c1b1e5c7792a1be12dec6ab603d,
it was only done in a deactivated state that still did not
offset the timestamps when muxing due to "assertion failures
and av sync errors". a1aa37dd0b96710d4a17718198a3f56aea2040c1
made it definitely more likely to run into assertion failures
(namely if the relative block timestamp doesn't fit into an int16_t).

Yet all of the above issues have been fixed (in commits
962d63157322466a9a82f9f9d84c1b6f1b582f65,
5d3953a5dcfd5f71391b7f34908517eb6f7e5146 and
4ebeab15b037a21f195696cef1f7522daf42f3ee). This commit therefore
enables applying CodecDelay, fixing ticket #7182.

There is just one slight regression from this: If one has input
with encoder delay where the first timestamp is negative, but
the pts of the part of the data that is actually intended to be
output is nonnegative, then the timestamps will currently by default
be shifted to make them nonnegative before they reach the muxer;
the muxer will then ensure that the shifted timestamps are retained.
Before this commit, the muxer did not ensure this; instead the
timestamps that the demuxer will output were shifted and
if the first timestamp of the actually intended output was zero
before shifting, then this unintentional shift just cancels
the shift performed before the packet reached the muxer.
(But notice that this only applies if all the tracks use the same
CodecDelay, or the relative sync between tracks will be impaired.)
This happens in the matroska-opus-remux and matroska-ogg-opus-remux
FATE tests. Future commits will forward the information that
the Matroska muxer has a limited capability to handle negative
timestamps so that the shifting in libavformat can take advantage
of it.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
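
As a rough illustration of the path this commit affects (the file names here are hypothetical, not taken from the commit): a stream-copy remux of Ogg Opus into Matroska exercises mkv_write_block(), and the resulting timestamps can be inspected afterwards.

# Remux Opus without re-encoding; the muxer writes a CodecDelay element and,
# with this commit, adds that delay to the raw block timestamps it stores.
ffmpeg -i input.opus -c:a copy output.mka

# Inspect the audio packet timestamps as seen through the demuxer,
# which subtracts CodecDelay again as described above.
ffprobe -v error -select_streams a:0 -show_entries packet=pts_time -of csv output.mka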
-
Live AAC and H264 data into live stream
10 May 2024, by tzuleger
I have a remote camera that captures H264-encoded video data and AAC-encoded audio data and places the data into a custom ring buffer, which is then sent to a Node.js socket server, where each packet is detected as audio or video and handled accordingly. That data should turn into a live stream; the protocol doesn't matter, but the delay has to be around 4 seconds and it has to be playable on iOS and Android devices.


After reading hundreds of pages of documentation, questions, or solutions on the internet, I can't seem to find anything about handling two separate streams of AAC and H264 data to create a live stream.


Despite attempting many different approaches, including a working implementation of HLS, I want to revisit all options for live streaming, and I am hoping someone out there can give me advice or point me to specific documentation on how to achieve this goal.


To be specific, these are our goals:


- Stream AAC and H264 data from the remote cellular camera to a server, which will do some work on that data to live stream to one user (possibly more users in the future) on a mobile iOS or Android device.
- The delay of the live stream should be a maximum of 4 seconds; if the user has bad signal, a longer delay is okay, as we obviously cannot do anything about that.
- We should not have to re-encode our data. We've explored WebRTC, but that requires Opus audio packets and would thus require us to re-encode the data, which would be expensive for our server to run.

Any and all help, ranging from re-visiting an old approach we took to exploring new ones, is appreciated.


I can provide code snippets as well for our current implementation of LLHLS if it helps, but I figured this post is already long enough.


I've tried FFmpeg with named pipes; I expected it to just work, but FFmpeg kept blocking on the first named-pipe input. I thought of just writing the data out to two files and then using FFmpeg, but it's continuous data and I don't know enough about FFmpeg to turn that kind of setup into a single live stream.
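
For reference, a minimal sketch of the named-pipe approach described above (the FIFO paths, the output location and the raw input formats are assumptions about what our camera delivers):

# One FIFO per elementary stream
mkfifo /tmp/cam_video.h264 /tmp/cam_audio.aac

# ffmpeg opens its inputs one after another, and opening a FIFO for reading
# blocks until something opens it for writing, so both pipes need active
# writers or ffmpeg appears to hang on the first input.
ffmpeg -f h264 -i /tmp/cam_video.h264 \
       -f aac -i /tmp/cam_audio.aac \
       -c copy -f hls -hls_time 2 -hls_list_size 6 /var/www/stream/live.m3u8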


I've tried implementing our own RTSP server on the camera using GStreamer (our camera had its RTSP server stripped out; that wasn't my call), but the camera's flash storage cannot hold GStreamer, so that wasn't an option.


My latest attempt was using a derivation of hls-parser to create an HLS manifest and mux.js to create the MP4 containers for .m4s fragmented MP4 files and do an HLS live stream. This was my most successful attempt: we had a live stream going, but the delay was up to 16 seconds, as one would expect with HLS live streaming. We could drop the target duration down to 2 seconds and get about 6-8 seconds of delay, but this could be unreliable, as these cameras can have poor signal, which makes it relatively expensive to send so many IDR frames with such low bandwidth.
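
For comparison, the short-segment fragmented-MP4 setup described above looks roughly like this when done purely on the ffmpeg side (input and output names are hypothetical):

# Repackage into 2-second fragmented-MP4 HLS segments without re-encoding.
# Each segment must start on an IDR frame, so shorter segments only help if
# the camera produces keyframes at least that often.
ffmpeg -i camera_feed.ts -c copy \
       -f hls -hls_time 2 -hls_list_size 6 \
       -hls_segment_type fmp4 stream.m3u8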

With the delay being the only remaining factor, I attempted to upgrade the implementation to support Apple's Low-Latency HLS. It seems to work, as the right partial segments are getting requested and everything that makes up LLHLS behaves as intended, but the delay isn't going down when played in iOS' native AVPlayer; as a matter of fact, it looks like it has worsened.


I should also mention that my knowledge of media streaming is fairly limited. I've learned most of what I describe in this post over the past 3 months by reading RFCs, documentation, and Stack Overflow/Reddit questions and answers. If anything appears confusing, it may just be my lack of understanding.


-
How to improve Desktop capture performance and quality with ffmpeg [closed]
6 November 2024, by Francesco Bramato
I'm developing a game capture feature for my Electron app. I've been working on this for a while and have tried a lot of different parameter combinations, and now I'm running out of ideas :)


I've read tons of ffmpeg documentation, SO posts and other sites, but I'm not really an ffmpeg expert or a video editing pro.


This is how it works now:


The app spawns an ffmpeg command based on the user's settings:


- Output format (mp4, mkv, avi)
- Framerate (12, 24, 30, 60)
- Codec (X264, NVIDIA NVENC, AMD AMF)
- Bitrate (from 1000 to 10000 kbps)
- Presets (for X264)
- Audio output (a dshow device like StereoMix or VB-Cable) and Audio input (a dshow device like the Microphone)
- Final Resolution (720p, 1080p, 2K, Original Size)

The command executed, so far, is:


ffmpeg.exe -nostats -hide_banner -hwaccel cuda -hwaccel_output_format cuda -f gdigrab -draw_mouse 0 -framerate 60 -offset_x 0 -offset_y 0 -video_size 2560x1440 -i desktop -f dshow -i audio=@device_cm_{33D9A762-90C8-11D0-BD43-00A0C911CE86}\wave_{D61FA53D-FA37-4BE7-BE2F-4005F94790BB} -ar 44100 -colorspace bt709 -color_trc bt709 -color_primaries bt709 -c:v h264_nvenc -b:v 6000k -preset slow -rc cbr -profile:v high -g 60 -acodec aac -maxrate 6000k -bufsize 12000k -pix_fmt yuv420p -f mpegts -



One of the settings is the recording mode: full game session or replay buffer.
For a full game session the output is a file; for the replay buffer it is stdout.


The output format is mpegts because, as far as I have read in a lot of places, the video stream can be cut at any moment.


Replays are cut with different past and future durations based on game events.


In full game session mode, the replays are cut directly from the mpegts file.


In replay buffer mode, the ffmpeg stdout is redirected to the app, which records the buffer (1 or 2 minutes). When a replay must be created, the app saves the relevant buffer section to disk according to the past and future durations and, with another ffmpeg command, copies it to a final mp4 or mkv file.
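
As a sketch of that last step (file names are hypothetical), the saved buffer section is rewrapped without re-encoding, roughly like:

# Copy the dumped MPEG-TS section into an MP4 (or MKV) container as-is.
ffmpeg -i replay_section.ts -c copy replay.mp4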


Generally speaking, this works reliably.


There are a few issues:


- Although I ask ffmpeg to capture at 60 fps, the final result is at 30 fps (using -r 60 only speeds the final result up); see the sketch after this list.
- Some users have reported FPS drops in-game, especially when using NVIDIA NVENC (and having an NVIDIA GPU); using X264 seems to save some FPS.
- Colors are strange compared to the original: next to what I see on screen, they seem washed out. I may have solved this using -colorspace bt709 -color_trc bt709 -color_primaries bt709, but I don't know if that is the right choice.
- NVIDIA NVENC with any preset other than slow creates terribly laggy videos.
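
As referenced in the first item, here is a sketch of one thing I still want to verify (an assumption on my side, not a confirmed fix): whether -r 60 behaves differently as an input or as an output option. As an output option it duplicates or drops frames to hit a constant 60 fps instead of speeding the video up, and ffprobe shows what frame rate actually ended up in the file:

# What frame rate did the recording really get?
ffprobe -v error -select_streams v:0 -show_entries stream=avg_frame_rate,r_frame_rate -of default=nw=1 capture.mp4

# Minimal video-only capture test forcing a constant 60 fps on the output side
# (frames are duplicated or dropped rather than the playback speed changed).
ffmpeg -f gdigrab -framerate 60 -draw_mouse 0 -video_size 2560x1440 -i desktop \
       -c:v h264_nvenc -b:v 6000k -pix_fmt yuv420p -r 60 -vsync cfr test_60fps.mp4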

Here are two examples: 60 FPS, NVIDIA NVENC (slow preset, 6000 kbps), MP4.


Recorded by my app: https://www.youtube.com/watch?v=Msm62IwHdlk


Recorded by OBS with nearly the same settings: https://youtu.be/WuHoLh26W7E


Hope someone can help me


Thanks!