
Other articles (39)
-
MediaSPIP Core: Configuration
9 November 2010 — By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the skeletons; a page for the configuration of the site's home page; a page for the configuration of sectors.
It also provides an additional page, shown only when certain plugins are active, for controlling their display and specific features (...) -
Encoding and processing into web-friendly formats
13 April 2011 — MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in OGV and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in OGG (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed to extract the data needed for search-engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...) -
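The conversions described above correspond to ffmpeg invocations along these lines. This is an illustrative sketch, not MediaSPIP's actual pipeline; file names are placeholders and codec options are simplified:

```shell
# Video: HTML5-friendly OGV and WebM copies, plus an MP4 copy for the Flash fallback
ffmpeg -i input.mov -c:v libtheora -c:a libvorbis output.ogv
ffmpeg -i input.mov -c:v libvpx -c:a libvorbis output.webm
ffmpeg -i input.mov -c:v libx264 -c:a aac output.mp4

# Audio: OGG for HTML5, MP3 for the Flash fallback
ffmpeg -i input.wav -c:a libvorbis output.ogg
ffmpeg -i input.wav -c:a libmp3lame output.mp3
```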
Apache-specific configuration
4 February 2011 — Specific modules
For the Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but improve performance: mod_deflate and mod_headers, so that Apache compresses pages automatically (see this tutorial); mod_expires, to manage hit expiration correctly (see this tutorial).
It is also advisable to add Apache support for the WebM MIME type, as described in this tutorial.
Creating a (...)
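The module suggestions above might translate into Apache directives along these lines (a sketch, assuming the modules are installed and enabled; adapt content types and cache lifetimes to your setup):

```apache
# Compress text responses (mod_deflate) and vary on Accept-Encoding (mod_headers)
AddOutputFilterByType DEFLATE text/html text/css application/javascript
Header append Vary Accept-Encoding

# Manage expiration of static media (mod_expires)
ExpiresActive On
ExpiresByType image/png "access plus 1 month"
ExpiresByType video/webm "access plus 1 month"

# Serve WebM files with the correct MIME type
AddType video/webm .webm
```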
On other sites (5820)
-
How can I record a live stream using ffmpeg? (Without any encoding or transcoding)
26 February 2015, by Sina Davani
I have a program written using ffmpeg. It captures a live stream and then plays it on the display (a simple video player).
What I need now is the ability to save the input stream to a file on disk, so it can be played later with a standard video player.
Can anyone please give me a simple example showing how this is done? When I write the captured packets from the input stream directly to a file, the resulting file is corrupted and unusable. I did try setting the file header, but that didn't work either. The live stream comes from an IP camera, so it is already encoded in H.264; I am guessing it should be possible to write it to a file directly, without any encoding.
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
    // Is this a packet from the video stream?
    if (packet.stream_index == videoStreamID)
    {
        // Decode video frame
        if (avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet) <= 0)
            input->sizeOfDroppedPacketStructures += packet.size;
        partTimeReadSize += packet.size;
        ++numberOfPacketsForFrame;
        // Did we get a video frame?
        if (frameFinished)
        {
            // Convert the image from its native format to RGB
            sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data, pFrame->linesize,
                      0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
            frameNumber = pCodecCtx->frame_number;
            Mat cvFrame(pCodecCtx->height, pCodecCtx->width, CV_8UC3,
                        pFrameRGB->data[0], pFrameRGB->linesize[0]);
            //namedWindow(input->URL.c_str(), WINDOW_AUTOSIZE);
            imshow(input->URL.c_str(), cvFrame);
            cvFrame.release();
            QueryPerformanceCounter(&now);
            double waitTime = (1.0 / av_q2d(pCodecCtx->framerate)) * 1000.0
                - ((double)(now.QuadPart - lastTime.QuadPart) / (double)freq.QuadPart) * 1000;
            if (waitTime < 0) waitTime = 0;
            Sleep(waitTime);
            QueryPerformanceCounter(&lastTime);
        }
    }
    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
    SDL_PollEvent(&event);
    switch (event.type) {
    case SDL_QUIT:
        SDL_Quit();
        exit(0);
        break;
    default:
        break;
    }
}
-
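Since the camera stream is already H.264, the usual answer is a stream-copy remux: read packets and write them into a container without ever opening a decoder or encoder. The following is a minimal sketch using libavformat; `in_url` and `out_path` are placeholders, and error handling is reduced to early returns:

```c
#include <libavformat/avformat.h>

// Sketch: copy packets from an already-encoded input (e.g. an RTSP/H.264
// camera stream) into a file without transcoding. The container writes the
// header/trailer that a bare packet dump lacks, which is why writing packets
// straight to disk produces an unplayable file.
int remux_stream(const char *in_url, const char *out_path)
{
    AVFormatContext *in = NULL, *out = NULL;
    AVPacket pkt;
    int ret;

    if (avformat_open_input(&in, in_url, NULL, NULL) < 0) return -1;
    if (avformat_find_stream_info(in, NULL) < 0) return -1;

    avformat_alloc_output_context2(&out, NULL, NULL, out_path);
    if (!out) return -1;

    // One output stream per input stream; copy codec parameters,
    // no encoder is opened anywhere.
    for (unsigned i = 0; i < in->nb_streams; i++) {
        AVStream *os = avformat_new_stream(out, NULL);
        if (!os || avcodec_parameters_copy(os->codecpar, in->streams[i]->codecpar) < 0)
            return -1;
        os->codecpar->codec_tag = 0;
    }

    if (!(out->oformat->flags & AVFMT_NOFILE) &&
        avio_open(&out->pb, out_path, AVIO_FLAG_WRITE) < 0)
        return -1;
    if (avformat_write_header(out, NULL) < 0) return -1;  // container header

    while (av_read_frame(in, &pkt) >= 0) {
        // Rescale timestamps from the input stream's time base to the output's.
        av_packet_rescale_ts(&pkt,
                             in->streams[pkt.stream_index]->time_base,
                             out->streams[pkt.stream_index]->time_base);
        pkt.pos = -1;
        ret = av_interleaved_write_frame(out, &pkt);
        av_packet_unref(&pkt);
        if (ret < 0) break;
    }

    av_write_trailer(out);  // finalizes the index (e.g. the MP4 moov atom)
    avformat_close_input(&in);
    if (!(out->oformat->flags & AVFMT_NOFILE)) avio_closep(&out->pb);
    avformat_free_context(out);
    return 0;
}
```

To record while still displaying, the same packet can be written (before decoding) inside the existing `av_read_frame` loop instead of running a second loop.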
avformat: add fields to AVProgram/AVStream for PMT change tracking
18 May 2018, by Aman Gupta
These fields will allow the mpegts demuxer to expose details about the PMT/program which created the AVProgram and its AVStreams.
In mpegts, a PMT which advertises streams has a version number which can be incremented at any time. When the version changes, the pids which correspond to each of its streams can also change.
Since ffmpeg creates a new AVStream per pid by default, an API user needs the ability to (a) detect when the PMT changed, and (b) tell which AVStreams were added to replace earlier streams.
This has been a long-standing issue with ffmpeg's handling of mpegts streams with PMT changes, and I found two related patches in the wild that attempt to solve the same problem:
The first is in MythTV's ffmpeg fork, where they added a void (*streams_changed)(void*); to AVFormatContext and call it from their fork of the mpegts demuxer whenever the PMT changes.
The second was proposed by XBMC in https://ffmpeg.org/pipermail/ffmpeg-devel/2012-December/135036.html, where they created a new AVMEDIA_TYPE_DATA stream with id=0 and attempted to send packets to it whenever the PMT changed.
Signed-off-by: Aman Gupta <aman@tmm1.net>
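Based on the commit's description, an API user could poll for PMT changes after each read by comparing a cached copy of the program's PMT version against the field this patch exposes. A rough sketch, treating the field names (`AVProgram.pmt_version`) as documented by the patch and the fixed-order program array as a simplifying assumption:

```c
#include <libavformat/avformat.h>

// Hypothetical helper: detect a PMT update by watching AVProgram.pmt_version
// (the field added by this commit). cached_versions must hold one entry per
// program and assumes the program order in s->programs stays stable.
static int pmt_changed(AVFormatContext *s, int *cached_versions)
{
    int changed = 0;
    for (unsigned i = 0; i < s->nb_programs; i++) {
        AVProgram *p = s->programs[i];
        if (p->pmt_version != cached_versions[i]) {
            cached_versions[i] = p->pmt_version;
            changed = 1;  // caller can now rescan streams for newly added pids
        }
    }
    return changed;
}
```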
-
swscale/cms: add color management subsystem
29 November 2024, by Niklas Haas
The underlying color mapping logic was ported as straightforwardly as possible from libplacebo, although the API and glue code have been very heavily refactored/rewritten. In particular, the generalization of gamut mapping methods is replaced by a single ICC intent selection, and constants have been hard-coded.
To minimize the amount of overall operations, this gamut mapping LUT now embeds a direct end-to-end transformation to the output color space; something that libplacebo does in shaders, but which is prohibitively expensive in software.
In order to preserve compatibility with dynamic tone mapping without severely regressing performance, we add the ability to generate a pair of "split" LUTs, one for encoding the input and output to the perceptual color space, and a third to embed the tone mapping operation. Additionally, this intermediate space could be used for additional subjective effects (e.g. changing saturation or brightness).
The big downside of the new approach is that generating a static color mapping LUT is now fairly slow, as the chromaticity lobe peaks have to be recomputed for every single RGB value, since correlated RGB colors are not necessarily aligned in ICh space. Generating a split 3DLUT significantly alleviates this problem, because the expensive step is done as part of the IPT input LUT, which can share the same hue peak calculation at least for all input intensities.