
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (35)
-
Farm management
2 March 2010, by
The farm as a whole is managed by "super admins". Certain settings can be adjusted to regulate the needs of the different channels.
Initially it uses the "Gestion de mutualisation" plugin (...)
-
Requesting the creation of a channel
12 March 2010, by
Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel: the first at the time of registration, the second after registration, by filling in a request form.
Both methods ask for the same information and work in much the same way: the future user must fill in a series of form fields which first of all give the administrators information about (...)
-
Final creation of the channel
12 March 2010, by
Once your request has been approved, you can proceed with the actual creation of the channel. Each channel is a fully fledged site placed under your responsibility. The platform administrators have no access to it.
Upon approval, you receive an email inviting you to create your channel.
To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
At that point you are asked for a password; you simply need to (...)
On other sites (5182)
-
armv6: Accelerate ff_imdct_half for general case (mdct_bits != 6)
11 July 2014, by Ben Avison
armv6: Accelerate ff_imdct_half for general case (mdct_bits != 6)
The previous implementation targeted DTS Coherent Acoustics, which only
requires mdct_bits == 6. This relatively small size lent itself to
unrolling the loops a small number of times, and encoding offsets
calculated at assembly time within the load/store instructions of each
iteration.

In the more general case (codecs such as AAC and AC3) much larger arrays
are used - mdct_bits == [8, 9, 11]. The old method does not scale for
these cases, so more integer registers are used with non-unrolled versions
of the loops (and with some stack spillage). The postrotation filter loop
is still unrolled by a factor of 2 to permit the double-buffering of some
VFP registers to facilitate overlap of neighbouring iterations.

I benchmarked the result by measuring the number of gperftools samples
that hit anywhere in the AAC decoder (starting from aac_decode_frame())
or specifically in ff_imdct_half_c / ff_imdct_half_vfp, for the same
example AAC stream:

                      Before              After
                   Mean   StdDev      Mean   StdDev   Confidence   Change
aac_decode_frame  2368.1    35.8     2117.2    35.3       100.0%   +11.8%
ff_imdct_half_*    457.5    22.4      251.2    16.2       100.0%   +82.1%

Signed-off-by: Martin Storsjö <martin@martin.st>
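The unrolling and double-buffering described above can be pictured at C level. The sketch below is only an illustration of unrolling a post-rotation style loop by a factor of 2 so that two independent element pairs are in flight per iteration; the function and variable names (postrotate_unrolled2, re, im, cos_t, sin_t) are hypothetical and this is not the actual ff_imdct_half_vfp assembly.

static void postrotate_unrolled2(float *re, float *im,
                                 const float *cos_t, const float *sin_t,
                                 int n)
{
    /* n is assumed even, matching the unroll factor of 2. */
    for (int i = 0; i < n; i += 2) {
        /* Keep two independent pairs in locals so their loads and
           multiplies can overlap, mirroring the VFP double-buffering. */
        float r0 = re[i],     i0 = im[i];
        float r1 = re[i + 1], i1 = im[i + 1];
        re[i]     = r0 * cos_t[i]     - i0 * sin_t[i];
        im[i]     = r0 * sin_t[i]     + i0 * cos_t[i];
        re[i + 1] = r1 * cos_t[i + 1] - i1 * sin_t[i + 1];
        im[i + 1] = r1 * sin_t[i + 1] + i1 * cos_t[i + 1];
    }
}

For reference, the Change column is consistent with before_mean / after_mean - 1: 2368.1 / 2117.2 ≈ 1.118 (+11.8%) and 457.5 / 251.2 ≈ 1.821 (+82.1%).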
-
Merge commit '37b3361e755361d4ff14a2973df001c0140d98d6'
6 November 2014, by Michael Niedermayer -
FFMPEG Encoding Issues
16 October 2014, by madprogrammer2015
I am having issues encoding screen captures into an H.264 file for viewing. The program below is cobbled together from examples here and here. The first example uses an older version of the ffmpeg API, so I tried to update that example for use in my program. The file is created and has something written to it, but when I view it, the encoded images are all distorted. I am able to run the video encoding example from the ffmpeg API successfully. This is my first time posting, so if I missed anything please let me know.
I appreciate any assistance that is given.
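For orientation before the full listing, this is roughly the convert-and-encode step the program below attempts, sketched against the same generation of FFmpeg API it uses (sws_scale plus the since-deprecated avcodec_encode_video2). The helper name encode_bgra_frame and the buffer arguments are illustrative assumptions; the sketch is not offered as the fix for the distortion.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}
#include <cstdio>
#include <cstdint>

/* Convert one packed BGRA screen buffer to the encoder's YUV420P frame and
   write any resulting packet to the raw .h264 output file. */
static int encode_bgra_frame(AVCodecContext *ctx, SwsContext *sws,
                             const uint8_t *bgra_buf, int bgra_stride,
                             AVFrame *yuv, int64_t pts, FILE *out)
{
    const uint8_t *src_data[4] = { bgra_buf, NULL, NULL, NULL };
    int src_linesize[4]        = { bgra_stride, 0, 0, 0 };

    /* sws_scale handles both the pixel-format and the stride conversion. */
    sws_scale(sws, src_data, src_linesize, 0, ctx->height,
              yuv->data, yuv->linesize);
    yuv->pts = pts;

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL;   /* encoder allocates the packet data */
    pkt.size = 0;

    int got_output = 0;
    if (avcodec_encode_video2(ctx, &pkt, yuv, &got_output) < 0)
        return -1;
    if (got_output) {
        fwrite(pkt.data, 1, pkt.size, out);   /* raw Annex B H.264 */
        av_free_packet(&pkt);
    }
    return 0;
}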
My program:
#include <windows.h>
#include <string>
#include <sstream>
#include <ctime>
#include <iostream>
#include <cstdio>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>
#include <libavutil/opt.h>
}
using namespace std;
void ScreenShot(const char* BmpName, uint8_t *frame)
{
HWND DesktopHwnd = GetDesktopWindow();
RECT DesktopParams;
HDC DevC = GetDC(DesktopHwnd);
GetWindowRect(DesktopHwnd,&DesktopParams);
DWORD Width = DesktopParams.right - DesktopParams.left;
DWORD Height = DesktopParams.bottom - DesktopParams.top;
DWORD FileSize = sizeof(BITMAPFILEHEADER)+sizeof(BITMAPINFOHEADER)+(sizeof(RGBTRIPLE)+1*(Width*Height*4));
char *BmpFileData = (char*)GlobalAlloc(0x0040,FileSize);
PBITMAPFILEHEADER BFileHeader = (PBITMAPFILEHEADER)BmpFileData;
PBITMAPINFOHEADER BInfoHeader = (PBITMAPINFOHEADER)&BmpFileData[sizeof(BITMAPFILEHEADER)];
BFileHeader->bfType = 0x4D42; // BM
BFileHeader->bfSize = sizeof(BITMAPFILEHEADER);
BFileHeader->bfOffBits = sizeof(BITMAPFILEHEADER)+sizeof(BITMAPINFOHEADER);
BInfoHeader->biSize = sizeof(BITMAPINFOHEADER);
BInfoHeader->biPlanes = 1;
BInfoHeader->biBitCount = 32;
BInfoHeader->biCompression = BI_RGB;
BInfoHeader->biHeight = Height;
BInfoHeader->biWidth = Width;
RGBTRIPLE *Image = (RGBTRIPLE*)&BmpFileData[sizeof(BITMAPFILEHEADER)+sizeof(BITMAPINFOHEADER)];
RGBTRIPLE color;
//pPixels = (RGBQUAD **)new RGBQUAD[sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER)];
int start = clock();
HDC CaptureDC = CreateCompatibleDC(DevC);
HBITMAP CaptureBitmap = CreateCompatibleBitmap(DevC,Width,Height);
SelectObject(CaptureDC,CaptureBitmap);
BitBlt(CaptureDC,0,0,Width,Height,DevC,0,0,SRCCOPY|CAPTUREBLT);
GetDIBits(CaptureDC,CaptureBitmap,0,Height,frame,(LPBITMAPINFO)BInfoHeader, DIB_RGB_COLORS);
int end = clock();
cout << "it took " << end - start << " to capture a frame" << endl;
DWORD Junk;
HANDLE FH = CreateFileA(BmpName,GENERIC_WRITE,FILE_SHARE_WRITE,0,CREATE_ALWAYS,0,0);
WriteFile(FH,BmpFileData,FileSize,&Junk,0);
CloseHandle(FH);
GlobalFree(BmpFileData);
}
void video_encode_example(const char *filename, int codec_id)
{
AVCodec *codec;
AVCodecContext *c= NULL;
int i, ret, x, y, got_output;
FILE *f;
AVFrame *frame;
AVPacket pkt;
uint8_t endcode[] = { 0, 0, 1, 0xb7 };
printf("Encode video file %s\n", filename);
/* find the mpeg1 video encoder */
codec = avcodec_find_encoder(AV_CODEC_ID_H264);
if (!codec) {
fprintf(stderr, "Codec not found\n");
cin.get();
exit(1);
}
c = avcodec_alloc_context3(codec);
if (!c) {
fprintf(stderr, "Could not allocate video codec context\n");
cin.get();
exit(1);
}
/* put sample parameters */
c->bit_rate = 400000;
/* resolution must be a multiple of two */
c->width = 352;
c->height = 288;
/* frames per second */
c->time_base.num=1;
c->time_base.den = 25;
c->gop_size = 10; /* emit one intra frame every ten frames */
c->max_b_frames=1;
c->pix_fmt = AV_PIX_FMT_YUV420P;
if(codec_id == AV_CODEC_ID_H264)
av_opt_set(c->priv_data, "preset", "slow", 0);
/* open it */
if (avcodec_open2(c, codec, NULL) < 0) {
fprintf(stderr, "Could not open codec\n");
exit(1);
}
f = fopen(filename, "wb");
if (!f) {
fprintf(stderr, "Could not open %s\n", filename);
exit(1);
}
frame = av_frame_alloc();
if (!frame) {
fprintf(stderr, "Could not allocate video frame\n");
exit(1);
}
frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;
/* the image can be allocated by any means and av_image_alloc() is
just the most convenient way if av_malloc() is to be used */
ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height, c->pix_fmt, 32);
if (ret < 0) {
fprintf(stderr, "Could not allocate raw picture buffer\n");
exit(1);
}
/* encode 1 second of video */
for(i=0;i<250;i++) {
av_init_packet(&pkt);
pkt.data = NULL; // packet data will be allocated by the encoder
pkt.size = 0;
fflush(stdout);
/* prepare a dummy image */
/* Y */
for (y = 0; y < c->height; y++) {
for (x = 0; x < c->width; x++) {
frame->data[0][y * frame->linesize[0] + x] = x + y + i * 3;
}
}
/* Cb and Cr */
for (y = 0; y < c->height/2; y++) {
for (x = 0; x < c->width/2; x++) {
frame->data[1][y * frame->linesize[1] + x] = 128 + y + i * 2;
frame->data[2][y * frame->linesize[2] + x] = 64 + x + i * 5;
}
}
frame->pts = i;
/* encode the image */
ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
if (ret < 0) {
fprintf(stderr, "Error encoding frame\n");
exit(1);
}
if (got_output) {
printf("Write frame %3d (size=%5d)\n", i, pkt.size);
fwrite(pkt.data, 1, pkt.size, f);
av_free_packet(&pkt);
}
}
/* get the delayed frames */
for (got_output = 1; got_output; i++) {
fflush(stdout);
ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
if (ret < 0) {
fprintf(stderr, "Error encoding frame\n");
exit(1);
}
if (got_output) {
printf("Write frame %3d (size=%5d)\n", i, pkt.size);
fwrite(pkt.data, 1, pkt.size, f);
av_free_packet(&pkt);
}
}
/* add sequence end code to have a real mpeg file */
fwrite(endcode, 1, sizeof(endcode), f);
fclose(f);
avcodec_close(c);
av_free(c);
av_freep(&frame->data[0]);
av_frame_free(&frame);
printf("\n");
}
void write_video_frame()
{
}
int lineSizeOfFrame(int width)
{
return (width*24 + 31)/32 * 4;//((width*24 / 8) + 3) & ~3;//(width*24 + 31)/32 * 4;
}
int getScreenshotWithCursor(uint8_t* frame)
{
int successful = 0;
HDC screen, bitmapDC;
HBITMAP screen_bitmap;
screen = GetDC(NULL);
RECT DesktopParams;
HWND desktop = GetDesktopWindow();
GetWindowRect(desktop, &DesktopParams);
int width = DesktopParams.right;
int height = DesktopParams.bottom;
bitmapDC = CreateCompatibleDC(screen);
screen_bitmap = CreateCompatibleBitmap(screen, width, height);
SelectObject(bitmapDC, screen_bitmap);
if (BitBlt(bitmapDC, 0, 0, width, height, screen, 0, 0, SRCCOPY))
{
int pos_x, pos_y;
HICON hcur;
ICONINFO icon_info;
CURSORINFO cursor_info;
cursor_info.cbSize = sizeof(CURSORINFO);
if (GetCursorInfo(&cursor_info))
{
if (cursor_info.flags == CURSOR_SHOWING)
{
hcur = CopyIcon(cursor_info.hCursor);
if (GetIconInfo(hcur, &icon_info))
{
pos_x = cursor_info.ptScreenPos.x - icon_info.xHotspot;
pos_y = cursor_info.ptScreenPos.y - icon_info.yHotspot;
DrawIcon(bitmapDC, pos_x, pos_y, hcur);
if (icon_info.hbmColor) DeleteObject(icon_info.hbmColor);
if (icon_info.hbmMask) DeleteObject(icon_info.hbmMask);
}
}
}
int header_size = sizeof(BITMAPINFOHEADER) + 256*sizeof(RGBQUAD);
size_t line_size = lineSizeOfFrame(width);
PBITMAPINFO lpbi = (PBITMAPINFO) malloc(header_size);
lpbi->bmiHeader.biSize = header_size;
lpbi->bmiHeader.biWidth = width;
lpbi->bmiHeader.biHeight = height;
lpbi->bmiHeader.biPlanes = 1;
lpbi->bmiHeader.biBitCount = 24;
lpbi->bmiHeader.biCompression = BI_RGB;
lpbi->bmiHeader.biSizeImage = height*line_size;
lpbi->bmiHeader.biXPelsPerMeter = 0;
lpbi->bmiHeader.biYPelsPerMeter = 0;
lpbi->bmiHeader.biClrUsed = 0;
lpbi->bmiHeader.biClrImportant = 0;
if (GetDIBits(bitmapDC, screen_bitmap, 0, height, (LPVOID)frame, lpbi, DIB_RGB_COLORS))
{
int i;
uint8_t *buf_begin = frame;
uint8_t *buf_end = frame + line_size*(lpbi->bmiHeader.biHeight - 1);
void *temp = malloc(line_size);
for (i = 0; i < lpbi->bmiHeader.biHeight / 2; ++i)
{
memcpy(temp, buf_begin, line_size);
memcpy(buf_begin, buf_end, line_size);
memcpy(buf_end, temp, line_size);
buf_begin += line_size;
buf_end -= line_size;
}
cout << *buf_begin << endl;
free(temp);
successful = 1;
}
free(lpbi);
}
DeleteObject(screen_bitmap);
DeleteDC(bitmapDC);
ReleaseDC(NULL, screen);
return successful;
}
int main()
{
RECT DesktopParams;
HWND desktop = GetDesktopWindow();
GetWindowRect(desktop, &DesktopParams);
int width = DesktopParams.right;
int height = DesktopParams.bottom;
uint8_t *frame = (uint8_t *)malloc(width * height);
AVCodec *codec;
AVCodecContext *codecContext = NULL;
AVPacket packet;
FILE *f;
AVFrame *pictureYUV = NULL;
AVFrame *pictureRGB;
avcodec_register_all();
codec = avcodec_find_encoder(AV_CODEC_ID_H264);
if(!codec)
{
cout << "codec not found!" << endl;
cin.get();
return 1;
}
else
{
cout << "codec h265 found!" << endl;
}
codecContext = avcodec_alloc_context3(codec);
codecContext->bit_rate = width * height * 4;
codecContext->width = width;
codecContext->height = height;
codecContext->time_base.num = 1;
codecContext->time_base.den = 250;
codecContext->gop_size = 10;
codecContext->max_b_frames = 1;
codecContext->keyint_min = 1;
codecContext->i_quant_factor = (float)0.71; // qscale factor between P and I frames
codecContext->b_frame_strategy = 20; ///// find out exactly what this does
codecContext->qcompress = (float)0.6; ///// find out exactly what this does
codecContext->qmin = 20; // minimum quantizer
codecContext->qmax = 51; // maximum quantizer
codecContext->max_qdiff = 4; // maximum quantizer difference between frames
codecContext->refs = 4; // number of reference frames
codecContext->trellis = 1;
codecContext->pix_fmt = AV_PIX_FMT_YUV420P;
codecContext->codec_id = AV_CODEC_ID_H264;
codecContext->codec_type = AVMEDIA_TYPE_VIDEO;
if(avcodec_open2(codecContext, codec, NULL) < 0)
{
cout << "couldn't open codec" << endl;
cout << stderr << endl;
cin.get();
return 1;
}
else
{
cout << "opened h265 codec!" << endl;
cin.get();
}
f = fopen("test.h264", "wb");
if(!f)
{
cout << "Unable to open file" << endl;
return 1;
}
struct SwsContext *img_convert_ctx = sws_getContext(codecContext->width, codecContext->height, PIX_FMT_RGB32, codecContext->width,
codecContext->height, codecContext->pix_fmt, SWS_BILINEAR, NULL, NULL, NULL);
int got_output = 0, i = 0;
uint8_t encode[] = { 0, 0, 1, 0xb7 };
try
{
for(i = 0; i < codecContext->time_base.den; i++)
{
av_init_packet(&packet);
packet.data = NULL;
packet.size = 0;
pictureRGB = av_frame_alloc();
pictureYUV = av_frame_alloc();
getScreenshotWithCursor(frame);
//ScreenShot("example.bmp", frame);
int nbytes = avpicture_get_size(AV_PIX_FMT_YUV420P, codecContext->width, codecContext->height); // allocating outbuffer
uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes*sizeof(uint8_t));
pictureRGB = av_frame_alloc();
pictureYUV = av_frame_alloc();
avpicture_fill((AVPicture*)pictureRGB, frame, PIX_FMT_RGB32, codecContext->width, codecContext->height); // fill image with input screenshot
avpicture_fill((AVPicture*)pictureYUV, outbuffer, PIX_FMT_YUV420P, codecContext->width, codecContext->height);
av_image_alloc(pictureYUV->data, pictureYUV->linesize, codecContext->width, codecContext->height, codecContext->pix_fmt, 32);
sws_scale(img_convert_ctx, pictureRGB->data, pictureRGB->linesize, 0, codecContext->height, pictureYUV->data, pictureYUV->linesize);
pictureYUV->pts = i;
avcodec_encode_video2(codecContext, &packet, pictureYUV, &got_output);
if(got_output)
{
printf("Write frame %3d (size=%5d)\n", i, packet.size);
fwrite(packet.data, 1, packet.size, f);
av_free_packet(&packet);
}
//av_frame_free(&pictureRGB);
//av_frame_free(&pictureYUV);
}
for(got_output = 1; got_output; i++)
{
fflush(stdout);
avcodec_encode_video2(codecContext, &packet, NULL, &got_output);
if (got_output) {
printf("Write frame %3d (size=%5d)\n", i, packet.size);
fwrite(packet.data, 1, packet.size, f);
av_free_packet(&packet);
}
}
}
catch(std::exception ex)
{
cout << ex.what() << endl;
}
avcodec_close(codecContext);
av_free(codecContext);
av_freep(&pictureYUV->data[0]);
//av_frame_free(&picture);
fwrite(encode, 1, sizeof(encode), f);
fclose(f);
cin.get();
return 0;
}