
Other articles (28)
-
Emballe Médias: uploading documents simply
29 October 2010. The emballe médias plugin was developed primarily for the MediaSPIP distribution, but it is also used in other related projects, such as Géodiversité. Required and compatible plugins
To work, this plugin requires other plugins to be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui.
Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)
-
HTML5 audio and video support
13 April 2011. MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The player MediaSPIP uses was created specifically for it and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
-
General document management
13 May 2011. MediaSPIP never modifies the original document that is uploaded.
For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original available for download in case it cannot be read in a web browser; and retrieving the metadata of the original document to describe the file textually.
The tables below explain what MediaSPIP can do (...)
On other sites (6394)
-
Stream OpenGL framebuffer over HTTP (via FFmpeg)
17 June 2016, by mOfl. I have an OpenGL application whose rendered images need to be streamed over the internet to mobile clients. Previously it sufficed to simply record the rendering into a video file, which already works; now this should be extended to streaming.
What is working right now:
- Render a scene to an OpenGL framebuffer object
- Capture the FBO content using NvIFR
- Encode it to H.264 using NvENC (no CPU round trip required)
- Download the encoded frame to host memory as a byte array
- Append this frame to a video file
None of these steps involves FFmpeg or any other library so far. I now want to replace the last step with "stream the current frame's byte array over the internet", and I assume that using FFmpeg and FFserver would be a reasonable choice for this. Am I correct? If not, what would be the proper way?
If so, how do I approach this within my C++ code? As pointed out, the frame is already encoded. Also, there is no sound or anything else, simply an H.264-encoded frame as a byte array that is updated irregularly and should be converted into a steady video stream. I assume this would be FFmpeg's job and that the subsequent streaming via FFserver would be simple from there. What I don't know is how to feed my data to FFmpeg in the first place, as all the FFmpeg tutorials I found (in a non-exhaustive search) use a file or a webcam/capture device as the data source, not volatile data in main memory.
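For illustration, one way around the "file or capture device" assumption is to give libavformat a custom read callback through avio_alloc_context, so the demuxer pulls the raw H.264 bytes straight out of application memory. The following is only a rough sketch under that assumption; FrameQueue and its pop() are hypothetical stand-ins for whatever buffering the renderer already has:

extern "C" {
#include <libavformat/avformat.h>
}
#include <algorithm>
#include <cstdint>
#include <deque>

// Hypothetical in-memory byte source, filled by the encoder thread.
struct FrameQueue {
    std::deque<uint8_t> bytes;
    int pop(uint8_t *dst, int max) {
        int n = std::min<int>(max, (int)bytes.size());
        std::copy_n(bytes.begin(), n, dst);
        bytes.erase(bytes.begin(), bytes.begin() + n);
        return n;
    }
};

// Read callback: libavformat calls this instead of reading from a file.
static int readFromQueue(void *opaque, uint8_t *buf, int buf_size) {
    FrameQueue *q = static_cast<FrameQueue *>(opaque);
    int n = q->pop(buf, buf_size);
    // A real implementation would block until data arrives;
    // returning AVERROR_EOF instead would terminate the stream.
    return n > 0 ? n : AVERROR(EAGAIN);
}

static AVFormatContext *openInMemoryH264(FrameQueue *q) {
    const int bufsize = 64 * 1024;
    uint8_t *iobuf = (uint8_t *)av_malloc(bufsize);
    AVIOContext *pb = avio_alloc_context(iobuf, bufsize, 0 /*read mode*/, q,
                                         &readFromQueue, NULL, NULL);
    AVFormatContext *ic = avformat_alloc_context();
    ic->pb = pb; // attach the custom IO before opening
    // "h264" is the demuxer for a raw Annex-B elementary stream.
    if (avformat_open_input(&ic, NULL, av_find_input_format("h264"), NULL) < 0)
        return NULL;
    return ic;
}

Packets read from such a context can then be handed to an output muxer (RTP, MPEG-TS, ...) without the data ever touching the disk.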
The file mentioned above, which I am already able to create, is a C++ file stream to which I append each single frame, meaning that differing video and rendering frame rates are not treated correctly. This also needs to be taken care of at some point.
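On that frame-rate mismatch: since the frames are produced irregularly, the usual approach is to stamp each packet with a presentation time taken from a real clock instead of assuming a fixed rate. A hedged sketch, where the microsecond render clock is an assumed input:

// Sketch: derive pts from a wall clock so irregularly produced frames
// still play back as a steady stream; elapsed_us is the time since
// streaming started, taken from the application's own timer.
static void stampPacket(AVPacket *pkt, AVStream *out, int64_t elapsed_us) {
    AVRational us = {1, 1000000};        // the clock ticks in microseconds
    pkt->pts = av_rescale_q(elapsed_us, us, out->time_base);
    pkt->dts = pkt->pts;                 // assuming the encoder emits no B-frames
}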
Can somebody point me in the right direction? Can I forward data from my application to FFmpeg to build a proper video feed without writing to the hard disk? Tutorials are greatly appreciated. By the way, FFmpeg/FFserver is not mandatory; if you have a better idea for streaming OpenGL framebuffer contents, I'm eager to know.
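As one alternative that avoids linking the FFmpeg libraries entirely: spawn the ffmpeg command-line tool and write the encoded frames to its stdin, which ffmpeg reads when the input is "-". Note that FFserver has since been removed from the FFmpeg project, so this sketch targets a plain MPEG-TS-over-UDP output; the URL and getNextEncodedFrame() are placeholders:

#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical stub: fills 'frame' with one encoded frame from NvENC,
// returns false when the renderer shuts down.
static bool getNextEncodedFrame(std::vector<uint8_t> &frame) {
    (void)frame;
    return false; // wire this to the actual NvENC output
}

int main() {
    // -f h264 declares the piped data as a raw Annex-B H.264 stream;
    // -c copy forwards it without re-encoding (POSIX popen).
    FILE *ff = popen("ffmpeg -f h264 -i - -c copy -f mpegts udp://127.0.0.1:1234", "w");
    if (!ff) return 1;
    std::vector<uint8_t> frame;
    while (getNextEncodedFrame(frame)) {
        fwrite(frame.data(), 1, frame.size(), ff);
        fflush(ff); // push each frame promptly instead of letting stdio buffer it
    }
    return pclose(ff);
}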
-
Custom IO writes only the header, the rest of the frames seem omitted
18 September 2023, by Daniel. I'm using libavformat to read packets from RTSP and remux them into fragmented MP4.


The video frames must stay intact, meaning I don't want to transcode or modify them in any way;
they shall be remuxed into MP4 in their original form (i.e. the NALUs shall remain the same).


I have updated libavformat to the latest version (currently 4.4).


Here is my snippet:


//open input, probesize is set to 32, we don't need to decode anything
avformat_open_input(...);

//open output with custom io
avformat_alloc_output_context2(&ofctx, ...);
ofctx->pb = avio_alloc_context(buffer, bufsize, 1 /*write flag*/, 0, 0, &writeOutput, 0);
ofctx->flags |= AVFMT_FLAG_NOBUFFER | AVFMT_FLAG_FLUSH_PACKETS | AVFMT_FLAG_CUSTOM_IO;

avformat_write_header(...);

//loop
av_read_frame(...)
LOGPACKET_DETAILS            //<- this works, packets are coming
av_write_frame(...)          //<- this doesn't work, my write callback is not called;
                             //   av_interleaved_write_frame doesn't seem to work either

int writeOutput(void *opaque, uint8_t *buffer, int buffer_size) {
    printf("writeOutput: writing %d bytes\n", buffer_size);
    return buffer_size; //the callback must return the number of bytes consumed
}



avformat_write_header works; it prints the header correctly.

I'm looking for the reason why my custom IO is not called after a frame has been read.


There must be some additional flag to tell avformat not to care about decoding and simply write out whatever comes in.
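For what it's worth, pure remuxing should not need any decode-related flags; the usual stream-copy setup just mirrors the input codec parameters onto the output stream before the header is written. A minimal sketch of that part, where videoIndex is assumed to be the index of the H.264 stream:

//sketch: stream-copy setup, no decoder involved
AVStream *in  = ifctx->streams[videoIndex];
AVStream *out = avformat_new_stream(ofctx, NULL);
avcodec_parameters_copy(out->codecpar, in->codecpar); //copy parameters, don't transcode
out->codecpar->codec_tag = 0; //let the mp4 muxer choose its own tag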


More information: the input stream is VBR-encoded H.264. It seems av_write_frame calls my write function only for SPS, PPS, or IDR frames; non-IDR frames are not passed through at all.
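That pattern is consistent with how fragmented MP4 is written: with keyframe-based fragmentation, the mov/mp4 muxer only closes a fragment, and hence calls the IO callback, when a new keyframe arrives, so non-IDR packets accumulate in the current fragment until then. If output is needed more often, the muxer can cut fragments by duration, or the application can flush it explicitly; a hedged sketch of both options:

//sketch: make the muxer emit fragments more often than once per IDR

//option A: cut fragments by duration (in microseconds) instead of by keyframe
AVDictionary *muxopts = NULL;
av_dict_set(&muxopts, "movflags", "empty_moov", 0);
av_dict_set(&muxopts, "frag_duration", "1000000", 0); //roughly one fragment per second
avformat_write_header(ofctx, &muxopts);
av_dict_free(&muxopts);

//option B: cut fragments manually; movflags must include frag_custom,
//then a NULL packet flushes the muxer and closes the current fragment:
//av_write_frame(ofctx, NULL);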

Update


I found out that if I request an IDR frame every second (I can ask the encoder for it), writeOutput is called every second.

I created a test: after a client joins, I request the encoder to create IDRs at 1 Hz, ten times. libav calls writeOutput at 1 Hz for those 10 seconds, but then the encoder sets itself back to creating an IDR only every 10 seconds, and libav likewise calls writeOutput only every 10 s, which makes my decoder fail. With IDRs at 1 Hz, the decoder is fine.