
Media (1)
-
Ogg detection bug
22 March 2013
Updated: April 2013
Language: French
Type: Video
Other articles (97)
-
Submitting bugs and patches
10 April 2011
Unfortunately, no piece of software is ever perfect...
If you think you have found a bug, report it in our ticket system, taking care to include the relevant details: the exact type and version of the browser with which you encountered the anomaly; as precise a description of the problem as possible; if possible, the steps to reproduce it; and a link to the site/page in question.
If you think you have fixed the bug yourself (...)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out.
-
Emballe médias: what is it for?
4 February 2011
This plugin aims to manage sites for publishing documents of all types.
It creates "media": a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;
On other sites (8112)
-
Error using FFmpeg to convert each input image into H.264, compiling in Visual Studio and running in MevisLab
21 February 2014, by user3012914
I am creating an ML module in the MevisLab framework. I am using FFmpeg to convert each image I receive into an H.264 video, saving it once I have all the frames. Unfortunately, I have a problem allocating the output buffer size: the application crashes when I include that allocation in my code, and if I leave it out, the output file is only 4 KB with nothing stored in it.
I am also not sure whether this is the correct way to get the HBITMAP into the encoder. Your suggestions would be very welcome.
My code:
BITMAPINFO bitmapInfo;
HDC hdc;
ZeroMemory(&bitmapInfo, sizeof(bitmapInfo));
BITMAPINFOHEADER &bitmapInfoHeader = bitmapInfo.bmiHeader;
bitmapInfoHeader.biSize = sizeof(bitmapInfoHeader);
bitmapInfoHeader.biWidth = _imgWidth;
bitmapInfoHeader.biHeight = _imgHeight;
bitmapInfoHeader.biPlanes = 1;
bitmapInfoHeader.biBitCount = 24;
bitmapInfoHeader.biCompression = BI_RGB;
bitmapInfoHeader.biSizeImage = ((bitmapInfoHeader.biWidth * bitmapInfoHeader.biBitCount / 8 + 3) & 0xFFFFFFFC) * bitmapInfoHeader.biHeight;
bitmapInfoHeader.biXPelsPerMeter = 10000;
bitmapInfoHeader.biYPelsPerMeter = 10000;
bitmapInfoHeader.biClrUsed = 0;
bitmapInfoHeader.biClrImportant = 0;
//RGBQUAD* Ref = new RGBQUAD[_imgWidth,_imgHeight];
HDC hdcscreen = GetDC(0);
hdc = CreateCompatibleDC(hdcscreen);
ReleaseDC(0, hdcscreen);
_hbitmap = CreateDIBSection(hdc, (BITMAPINFO*) &bitmapInfoHeader, DIB_RGB_COLORS, &_bits, NULL, NULL);
I use the code above to obtain the bitmap. Then I allocate the codec context as follows:
c->bit_rate = 400000;
// resolution must be a multiple of two
c->width = 1920;
c->height = 1080;
// frames per second
frame_rate = _framesPerSecondFld->getIntValue();
//AVRational rational = {1,10};
//c->time_base = (AVRational){1,25};
c->gop_size = 10; // emit one intra frame every ten frames
c->max_b_frames = 1;
c->keyint_min = 1; //minimum GOP size
c->time_base.num = 1; // framerate numerator
c->time_base.den = _framesPerSecondFld->getIntValue();
c->i_quant_factor = (float)0.71; // qscale factor between P and I frames
c->pix_fmt = AV_PIX_FMT_YUV420P; // the H.264 encoder expects YUV420P, which is what sws_scale produces below; RGB32 is likely rejected when the codec is opened
std::string msg;
msg.append("Context is stored");
_messageFld->setStringValue(msg.c_str());
I create the bitmap image from the input as follows:
PagedImage *inImg = getUpdatedInputImage(0);
ML_CHECK(inImg);
ImageVector imgExt = inImg->getImageExtent();
if ((imgExt.x == _imgWidth) && (imgExt.y == _imgHeight))
{
if (((imgExt.x % 4)==0) && ((imgExt.y % 4) == 0))
{
// read out input image and write output image into video
// get input image as an array
void* imgData = NULL;
SubImageBox imageBox(imgExt); // get the whole image
getTile(inImg, imageBox, MLuint8Type, &imgData);
iData = (MLuint8*)imgData;
int r = 0; int g = 0;int b = 0;
// since we have only images with
// a z-ext of 1, we can compute the c stride as follows
int cStride = _imgWidth * _imgHeight;
int offset = 0; // row offset into iData; _imgWidth * y does not fit in a uint8_t
// pointer into the bitmap that is
// used to write images into the avi
UCHAR* dst = (UCHAR*)_bits;
for (int y = _imgHeight-1; y >= 0; y--)
{ // scan the image in reverse; if the DIB's y-rows are written in normal order, no compression will be available
offset = _imgWidth * y;
for (int x = 0; x < _imgWidth; x++)
{
if (_isGreyValueImage)
{
r = iData[offset + x];
*dst++ = (UCHAR)r;
*dst++ = (UCHAR)r;
*dst++ = (UCHAR)r;
}
else
{
b = iData[offset + x]; // Windows bitmaps need reverse order: BGR instead of RGB
g = iData[offset + x + cStride ];
r = iData[offset + x + cStride + cStride];
*dst++ = (UCHAR)r;
*dst++ = (UCHAR)g;
*dst++ = (UCHAR)b;
}
// alpha channel in input image is ignored
}
}
Then I hand the frame to the encoder and write it out as H.264:
in_width = c->width;
in_height = c->height;
out_width = c->width;
out_height = c->height;
ibytes = avpicture_get_size(PIX_FMT_BGR32, in_width, in_height);
obytes = avpicture_get_size(PIX_FMT_YUV420P, out_width, out_height);
outbuf_size = 100000 + c->width*c->height*(32>>3); // allocate output buffer
outbuf = static_cast<uint8_t*>(malloc(outbuf_size));
if(!obytes)
{
std::string msg;
msg.append("Bytes cannot be allocated");
_messageFld->setStringValue(msg.c_str());
}
else
{
std::string msg;
msg.append("Bytes allocation done");
_messageFld->setStringValue(msg.c_str());
}
//create buffer for the output image
inbuffer = (uint8_t*)av_malloc(ibytes);
outbuffer = (uint8_t*)av_malloc(obytes);
inbuffer = (uint8_t*)dst; // caution: this discards the buffer just allocated, and dst was advanced to the end of the bitmap in the copy loop; the pixel data starts at _bits
//create ffmpeg frame structures. These do not allocate space for image data,
//just the pointers and other information about the image.
AVFrame* inpic = avcodec_alloc_frame();
AVFrame* outpic = avcodec_alloc_frame();
//this will set the pointers in the frame structures to the right points in
//the input and output buffers.
avpicture_fill((AVPicture*)inpic, inbuffer, PIX_FMT_BGR32, in_width, in_height);
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, out_width, out_height);
av_image_alloc(outpic->data, outpic->linesize, c->width, c->height, c->pix_fmt, 1); // caution: this overwrites the pointers that avpicture_fill just set, so outbuffer is never actually used
inpic->data[0] += inpic->linesize[0]*(_imgHeight-1); // flipping frame
inpic->linesize[0] = -inpic->linesize[0];
if(!inpic)
{
std::string msg;
msg.append("Image is empty");
_messageFld->setStringValue(msg.c_str());
}
else
{
std::string msg;
msg.append("Picture has allocations");
_messageFld->setStringValue(msg.c_str());
}
//create the conversion context
fooContext = sws_getContext(in_width, in_height, PIX_FMT_BGR32, out_width, out_height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
//perform the conversion
sws_scale(fooContext, inpic->data, inpic->linesize, 0, in_height, outpic->data, outpic->linesize);
//out_size = avcodec_encode_video(c, outbuf,outbuf_size, outpic);
if(!out_size) // out_size is never assigned while the encode call above stays commented out
{
std::string msg;
msg.append("Outsize is not valid");
_messageFld->setStringValue(msg.c_str());
}
else
{
std::string msg;
msg.append("Outsize is valid");
_messageFld->setStringValue(msg.c_str());
}
fwrite(outbuf, 1, out_size, f);
if(!fwrite) // caution: this tests the address of the fwrite function, which is never null, not the result of the write above
{
std::string msg;
msg.append("Frames couldn't be written");
_messageFld->setStringValue(msg.c_str());
}
else
{
std::string msg;
msg.append("Frames written to the file");
_messageFld->setStringValue(msg.c_str());
}
// for (;out_size; i++)
// {
out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
std::string msg;
msg.append("Writing Frames");
_messageFld->setStringValue(msg.c_str());// encode the delayed frames
_numFramesFld->setIntValue(_numFramesFld->getIntValue()+1);
fwrite(outbuf, 1, out_size, f);
// }
outbuf[0] = 0x00;
outbuf[1] = 0x00; // add sequence end code to have a real mpeg file
outbuf[2] = 0x01;
outbuf[3] = 0xb7;
fwrite(outbuf, 1, 4, f);
}
Then I close and clean up the image buffer and the file:
ML_TRACE_IN("MovieCreator::_endRecording()")
if (_numFramesFld->getIntValue() == 0)
{
_messageFld->setStringValue("Empty movie, nothing saved.");
}
else
{
_messageFld->setStringValue("Movie written to disk.");
_numFramesFld->setIntValue(0);
if (_hbitmap)
{
DeleteObject(_hbitmap);
}
if (c != NULL)
{
av_free(outbuffer);
av_free(inpic);
av_free(outpic);
fclose(f);
avcodec_close(c); // freeing memory
free(outbuf);
av_free(c);
}
}}
I think the main problem is here:
//out_size = avcodec_encode_video(c, outbuf,outbuf_size, outpic);
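For reference, here is a minimal sketch of how the buffer allocation and the per-frame encode step could look with the legacy FFmpeg API used above (avcodec_encode_video was removed in later releases). It reuses the names c, outpic and f from the question, and the buffer size formula is a conservative guess, not a definitive value:
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>
}
// a minimal sketch, assuming the pre-2.0 FFmpeg API from the question
static int encode_and_write_frame(AVCodecContext* c, AVFrame* outpic, FILE* f)
{
    // generous upper bound: one uncompressed frame plus FFmpeg's minimum buffer
    int outbuf_size = c->width * c->height * 4 + FF_MIN_BUFFER_SIZE;
    uint8_t* outbuf = static_cast<uint8_t*>(av_malloc(outbuf_size));
    if (!outbuf)
        return -1; // never hand the encoder a null buffer
    // pass NULL instead of outpic at the end of the stream to flush delayed frames
    int out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
    if (out_size > 0)
        fwrite(outbuf, 1, out_size, f); // write only the bytes actually produced
    av_free(outbuf);
    return out_size; // a negative value signals an encoder error
}
Two details worth double-checking against the code above: outpic->pts should increase monotonically from frame to frame, and c->pix_fmt must match what sws_scale actually produces (YUV420P).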
-
Calculate PSNR and MSE for individual frames using ffmpeg
26 June 2015, by Mayank Agarwal
I have an AVI file. I am converting it to .mp4 with the H.264 codec, and in a second case to .mp4 with the H.265 codec. Now I want to calculate the PSNR/MSE/MSAD between the reference file (the AVI) and each converted mp4 file using ffmpeg. I came across the ffmpeg command-line filters for PSNR and SSIM calculation, but they give the average PSNR value, not the PSNR frame by frame. I also want to do it in code rather than on the command line. I have read several examples: demuxing.c splits the whole file into packets with av_read_frame before calling the decoder, but how can I convert a pkt into a frame and calculate the PSNR or MSE values?
Regards,
Mayank
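Two starting points, sketched under the assumption of 8-bit video: on the command line, the psnr filter can log per-frame values when given a stats file, e.g. ffmpeg -i converted.mp4 -i reference.avi -lavfi psnr=stats_file=psnr.log -f null - (psnr.log then contains one line per frame); in code, the per-frame MSE and PSNR can be computed directly on each pair of decoded AVFrames. The helper below only looks at the luma plane, and assumes the frame pointers, strides and dimensions come from your own decode loop:
#include <cmath>
#include <cstdint>
// a minimal sketch: MSE over the luma plane (data[0]/linesize[0]) of two decoded frames
double luma_mse(const uint8_t* ref, int refStride,
                const uint8_t* dist, int distStride,
                int width, int height)
{
    double sum = 0.0;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            const double d = double(ref[y * refStride + x]) - double(dist[y * distStride + x]);
            sum += d * d;
        }
    return sum / (double(width) * double(height));
}
// PSNR for 8-bit samples; identical frames yield infinite PSNR
double luma_psnr(double mse)
{
    return mse > 0.0 ? 10.0 * std::log10(255.0 * 255.0 / mse) : INFINITY;
}
As for turning a pkt into a frame: the packet returned by av_read_frame is fed to the decoder (avcodec_decode_video2 in the API of that time), which fills an AVFrame; decoding both files in lock-step yields the frame pairs for the function above.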
-
Generate individual HLS-compatible .ts segments on-demand by downloading as few bytes as possible from a remote input file
27 January 2017, by Romain Cointepas
I'm trying to generate individual HLS-compatible .ts segments on demand by downloading/reading as few bytes as possible from a remote input file (hosted on a server that supports byte-range requests).
One application for this would be to transcode and play, on Apple TV (via AirPlay), a remote file that is not AirPlay compatible, without having to download the entire file first.
I am generating the playlist myself, and I have access to the ffprobe results for the remote file (which give the video duration, etc.).
I have something working that plays via AirPlay, but with small video and audio glitches between segments, when I use the following command to generate each segment:
ffmpeg -ss 60 -t 6 -i http://s3.amazonaws.com/misc-12345/avicii.vob -f mpegts -map 0:v:0 -map 0:a:0 -c:v libx264 -bsf:v h264_mp4toannexb -force_key_frames "expr:gte(t,n_forced*6)" -forced-idr 1 -pix_fmt yuv420p -colorspace bt709 -c:a aac -async 1 -preset ultrafast pipe:1
Note: the above command is for segment 11.ts, and in the m3u8 playlist I advertise each segment duration as 6 seconds.
Here is a YouTube video showing the audio/video glitches between segments:
https://www.youtube.com/watch?v=0vMwgbSfsu0
The segment and hls muxers of ffmpeg can't be used because they both generate all the segments at once.
I've been struggling with this for some days now and I would really appreciate some help!
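One avenue worth trying, offered as an educated guess rather than a confirmed fix: since every segment comes from an independent ffmpeg run, its timestamps restart at zero, and the player sees a discontinuity at each boundary. Shifting each segment's output timestamps to its position in the playlist keeps them continuous, e.g. for the segment starting at 60 seconds:
ffmpeg -ss 60 -t 6 -i http://s3.amazonaws.com/misc-12345/avicii.vob -f mpegts -map 0:v:0 -map 0:a:0 -c:v libx264 -bsf:v h264_mp4toannexb -force_key_frames "expr:gte(t,n_forced*6)" -forced-idr 1 -pix_fmt yuv420p -colorspace bt709 -c:a aac -async 1 -preset ultrafast -output_ts_offset 60 pipe:1
The audio may still click at boundaries even then: AAC frames are 1024 samples long and each independent encode inserts its own priming samples, so segment edges will not fall exactly on audio frame boundaries. If continuous timestamps are not enough, adding an EXT-X-DISCONTINUITY tag before each segment in the m3u8 at least tells the player to expect the reset.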