
Other articles (32)
-
Videos
21 April 2011
Like "audio" documents, MediaSPIP displays videos whenever possible using the HTML5 <video> tag.
One drawback of this tag is that it is not handled correctly by some browsers (Internet Explorer, naming no names) and that each browser natively supports only certain video formats.
Its main advantage is native video playback in the browser, which removes the need for Flash and (...) -
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Farm deployment
12 April 2011
MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This allows, for example: sharing setup costs between several projects or individuals; rapidly deploying a multitude of unique sites; avoiding having to dump every creation into a digital catch-all, as is the case on the big general-public platforms scattered across the (...)
On other sites (3543)
-
Segfault while trying to fill the yuv image for rtsp streaming
21 September 2016, by tankyx
I am capturing the video stream from a window, and I want to restream it to my RTSP proxy server. However, it seems I can't write the frame properly, even though I can show that same frame in an SDL window. Here is my code:
int StreamHandler::storeStreamData()
{
    // Allocate video frame
    pFrame = av_frame_alloc();

    // Allocate an AVFrame structure
    pFrameRGB = av_frame_alloc();
    if (pFrameRGB == NULL)
        throw myExceptions("Error : Can't alloc the frame.");

    // Determine required buffer size and allocate buffer
    numBytes = avpicture_get_size(AV_PIX_FMT_YUV420P, pCodecCtx->width,
                                  pCodecCtx->height);
    buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));

    // Assign appropriate parts of buffer to image planes in pFrameRGB
    avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_YUV420P,
                   pCodecCtx->width, pCodecCtx->height);

    //InitSdlDrawBack();

    // initialize SWS context for software scaling
    sws_ctx = sws_getContext(pCodecCtx->width,
                             pCodecCtx->height,
                             pCodecCtx->pix_fmt,
                             pCodecCtx->width,
                             pCodecCtx->height,
                             pCodecCtx->pix_fmt,
                             SWS_LANCZOS,
                             NULL,
                             NULL,
                             NULL);

    SetPixelArray();
    FfmpegEncoder enc("rtsp://127.0.0.1:1935/live/myStream");

    i = 0;
    while (av_read_frame(pFormatCtx, &packet) >= 0) {
        if (packet.stream_index == videoindex) {
            // Decode video frame
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
            if (frameFinished) {
                i++;
                //DrawFrame();
                sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
                          pFrame->linesize, 0, pCodecCtx->height,
                          pFrameRGB->data, pFrameRGB->linesize);
                enc.encodeFrame(pFrameRGB, i);
            }
        }
        // Free the packet that was allocated by av_read_frame
        av_free_packet(&packet);
    }

    // Free the RGB image
    av_free(buffer);
    av_frame_free(&pFrameRGB);

    // Free the YUV frame
    av_frame_free(&pFrame);

    // Close the codecs
    avcodec_close(pCodecCtx);
    avcodec_close(pCodecCtxOrig);

    // Close the video file
    avformat_close_input(&pFormatCtx);

    return 0;
}

void StreamHandler::SetPixelArray()
{
    yPlaneSz = pCodecCtx->width * pCodecCtx->height;
    uvPlaneSz = pCodecCtx->width * pCodecCtx->height / 4;
    yPlane = (Uint8*)malloc(yPlaneSz);
    uPlane = (Uint8*)malloc(uvPlaneSz);
    vPlane = (Uint8*)malloc(uvPlaneSz);
    if (!yPlane || !uPlane || !vPlane)
        throw myExceptions("Error : Can't create pixel array.");
    uvPitch = pCodecCtx->width / 2;
}

Here I fill the YUV image and write the packet.
void FfmpegEncoder::encodeFrame(AVFrame * frame, int frameCount)
{
    AVPacket pkt = { 0 };
    int got_pkt;

    av_init_packet(&pkt);
    frame->pts = frameCount;
    FillYuvImage(frame, frameCount, this->pCodecCtx->width, this->pCodecCtx->height);
    if (avcodec_encode_video2(this->pCodecCtx, &pkt, frame, &got_pkt) < 0)
        throw myExceptions("Error: failed to encode the frame. FfmpegEncoder.cpp l:61\n");
    // if the frame is well encoded
    if (got_pkt) {
        pkt.stream_index = this->st->index;
        pkt.pts = av_rescale_q_rnd(pkt.pts, this->pCodecCtx->time_base, this->st->time_base,
                                   AVRounding(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        if (av_write_frame(this->outFormatCtx, &pkt) < 0)
            throw myExceptions("Error: failed to write video frame. FfmpegEncoder.cpp l:68\n");
    }
}
void FfmpegEncoder::FillYuvImage(AVFrame * pict, int frame_index, int width, int height)
{
    int x, y, i;

    i = frame_index;
    for (y = 0; y < height; y++)
    {
        for (x = 0; x < width / 2; x++)
            pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;
    }
    for (y = 0; y < height; y++)
    {
        for (x = 0; x < width / 2; x++)
        {
            pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
            pict->data[2][y * pict->linesize[2] + x] = 64 + y + i * 5; // segfault here
        }
    }
}

The "FillYuvImage" method is copied from an FFmpeg example, but it does not work for me. If I don't call it, the "av_write_frame" function won't work either (it also segfaults).
EDIT: Here is my output context and codec initialization.

FfmpegEncoder::FfmpegEncoder(char *url)
{
    AVRational tmp_time_base;
    AVDictionary* options = NULL;

    this->pCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (this->pCodec == NULL)
        throw myExceptions("Error: Can't initialize the encoder. FfmpegEncoder.cpp l:9\n");
    this->pCodecCtx = avcodec_alloc_context3(this->pCodec);

    // Alloc output context
    if (avformat_alloc_output_context2(&outFormatCtx, NULL, "rtsp", url) < 0)
        throw myExceptions("Error: Can't alloc stream output. FfmpegEncoder.cpp l:17\n");

    this->st = avformat_new_stream(this->outFormatCtx, this->pCodec);
    if (this->st == NULL)
        throw myExceptions("Error: Can't create stream. FfmpegEncoder.cpp l:22\n");

    av_dict_set(&options, "vprofile", "main", 0);
    av_dict_set(&options, "tune", "zerolatency", 0);

    tmp_time_base.num = 1;
    tmp_time_base.den = 60;

    // TODO : parse these values
    this->pCodecCtx->bit_rate = 3000000;
    this->pCodecCtx->width = 1280;
    this->pCodecCtx->height = 720;

    // This sets the fps. 60 fps at this point.
    this->pCodecCtx->time_base = tmp_time_base;

    // Add an intra frame every 12 frames
    this->pCodecCtx->gop_size = 12;
    this->pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;

    // Open codec, using the context + x264 options
    if (avcodec_open2(this->pCodecCtx, this->pCodec, &options) < 0)
        throw myExceptions("Error: Can't open the codec. FfmpegEncoder.cpp l:43\n");

    if (avcodec_copy_context(this->st->codec, this->pCodecCtx) != 0) {
        throw myExceptions("Error : Can't copy codec context. FfmpegEncoder.cpp : l.46");
    }

    av_dump_format(this->outFormatCtx, 0, url, 1);

    if (avformat_write_header(this->outFormatCtx, NULL) != 0)
        throw myExceptions("Error: failed to connect to RTSP server. FfmpegEncoder.cpp l:48\n");
} -
Cleaning up after av_frame_get_buffer
4 November 2016, by Jason C
There are two aspects to my question. I'm using the libav libraries from FFmpeg 3.1.
First, how do you appropriately dispose of a frame whose buffer has been allocated with av_frame_get_buffer? E.g.:

AVFrame *frame = av_frame_alloc();
frame->width = ...;
frame->height = ...;
frame->format = ...;
av_frame_get_buffer(frame, ...);

Do any buffers have to be freed manually, beyond the call to av_frame_free(frame)? The documentation doesn't mention anything special, but in my experience the FFmpeg documentation often leaves out important details, or at least hides them in places far away from the obvious spots. I took a look at the code for av_frame_free and av_frame_unref, but it branched out quite a bit and I couldn't quite determine if it covered everything.
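As far as the ref-counting API goes (stated here as an understanding, not a definitive answer): av_frame_get_buffer() stores its allocations as AVBufferRefs in frame->buf[], and av_frame_free(), which takes the frame by address, calls av_frame_unref() internally and drops those refs. A minimal sketch, with hypothetical dimensions:

// Minimal sketch (assumes FFmpeg 3.x ref-counted AVFrame semantics).
AVFrame *frame = av_frame_alloc();
frame->width  = 1280;               // hypothetical values for illustration
frame->height = 720;
frame->format = AV_PIX_FMT_YUV420P;
av_frame_get_buffer(frame, 32);     // installs AVBufferRefs in frame->buf[]
/* ... use the frame ... */
av_frame_free(&frame);              // unrefs frame->buf[], then frees the struct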
Second, if something beyond av_frame_free needs to be done, then is there any catch-all way to clean up a frame if you don't know how its data has been allocated? For example, assuming someBuffer is already allocated with the appropriate size:

AVFrame *frame2 = av_frame_alloc();
frame2->width = ...;
frame2->height = ...;
frame2->format = ...;
av_image_fill_arrays(frame2->data, frame2->linesize, someBuffer,
                     frame2->format, frame2->width, frame2->height, 1);

Is there a way to free both frame and frame2 in the above examples using the exact same code? That is, frame and its data should be freed, and frame2 should be freed, but not someBuffer, since libav did not allocate it.
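And for the catch-all case, again as a hedged sketch rather than a definitive answer: av_frame_unref() only releases buffers the frame owns through frame->buf[], and av_image_fill_arrays() installs no such refs, so the same cleanup call appears to be safe for both frames:

// Sketch: uniform cleanup under the assumption above.
av_frame_free(&frame);   // releases the av_frame_get_buffer() data, then the struct
av_frame_free(&frame2);  // frame2->buf[] is empty, so only the struct is freed
av_free(someBuffer);     // libav never owned this; match it to however it was allocated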
ffmpeg sws_scale converting from YUV420P to RGB24 results in wrong color values
3 November 2016, by Drazurh
I'm using sws_scale to convert a video from YUV420P to RGB24. The resulting RGB values are wrong, looking much more saturated/contrasted than expected. For example, the first pixel should have an RGB value of (223,73,30), but using sws_scale results in (153,0,0).
Here's the code I'm using:

uint8_t *buffer = NULL;
int numBytes;

// Determine required buffer size and allocate buffer
numBytes = avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
                              pCodecCtx->height);
buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));

// Assign appropriate parts of buffer to image planes in pFrameRGB
// Note that pFrameRGB is an AVFrame, but AVFrame is a superset
// of AVPicture
std::cout << "Filling picture of size " << pCodecCtx->width << " x " << pCodecCtx->height << std::endl;
avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);

// initialize SWS context for software scaling
std::cout << "initializing SWS context\n";
sws_ctx = sws_getContext(pCodecCtx->width,
                         pCodecCtx->height,
                         pCodecCtx->pix_fmt,
                         pCodecCtx->width,
                         pCodecCtx->height,
                         PIX_FMT_RGB24,
                         SWS_FAST_BILINEAR,
                         NULL,
                         NULL,
                         NULL);

while (frameFinished == 0)
{
    if (av_read_frame(pFormatCtx, &packet) < 0) {
        std::cerr << "Could not read frame!\n";
        return false;
    }
    if (packet.stream_index == videoStream)
    {
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
    }
}

sws_scale(sws_ctx, (uint8_t const * const *)pFrame->data,
          pFrame->linesize, 0, pCodecCtx->height,
          pFrameRGB->data, pFrameRGB->linesize);

The values in pFrameRGB are incorrect. I've tried troubleshooting this for hours :-( Any ideas on how to track down my mistake?
Here’s a link to the repo. The offending code is in Icosahedron Video Player/mesh.cpp, Mesh::LoadVideo and Mesh::NextFrame().
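One possible lead, offered as an assumption rather than a diagnosis: oversaturated colors with crushed dark values are the classic symptom of full-range ("JPEG") YUV, such as yuvj420p, being converted with limited-range coefficients. swscale lets you declare the source range explicitly; a minimal sketch applied to the context above:

// Sketch: force full-range input (assumption: the source really is full-range YUV).
// RGB output is always full range.
int *inv_table, *table, srcRange, dstRange, brightness, contrast, saturation;
sws_getColorspaceDetails(sws_ctx, &inv_table, &srcRange, &table, &dstRange,
                         &brightness, &contrast, &saturation);
sws_setColorspaceDetails(sws_ctx,
                         sws_getCoefficients(SWS_CS_ITU601), 1,  // src: full range
                         table, 1,                               // dst: full range
                         brightness, contrast, saturation);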