Newest 'x264' Questions - Stack Overflow
Articles published on the site
-
What format_name to use in avformat_alloc_output_context2?
7 January 2013, by Tishu

Can anyone produce a readable H264 file from this FFMPEG tutorial example? The only thing I have changed is the output format on line 350:

avformat_alloc_output_context2(&oc, NULL, "h264", filename);

Running it with FFMPEG 1.0.1 + libx264 v129 produces a 6 MB .3gp file that is unreadable by most players (including VLC). When I load it I can see that it contains all the frames, and I can decode and view them successfully, but for some reason most players will just fail to open it. Does anyone have more success?
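(For comparison, the "h264" name selects libavformat's raw Annex B muxer, so the output is an elementary stream regardless of the .3gp filename; asking for a real container instead is one likely fix. A sketch, with "mp4" as one assumed choice and oc/filename taken from the question:)

```c
/* Name a container muxer explicitly ... */
avformat_alloc_output_context2(&oc, NULL, "mp4", "out.mp4");

/* ... or pass NULL as format_name and let libavformat
   guess the muxer from the file extension. */
avformat_alloc_output_context2(&oc, NULL, NULL, "out.mp4");
```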
-
FFMPEG tutorial example produces a corrupted video
6 January 2013, by Tishu

Can anyone produce a readable H264 file from this FFMPEG tutorial example? The only thing I have changed is the output format on line 350:

avformat_alloc_output_context2(&oc, NULL, "h264", filename);

Running it with FFMPEG 1.0.1 + libx264 v129 produces a 6 MB .3gp file that is unreadable by most players (including VLC). When I load it I can see that it contains all the frames, and I can decode and view them successfully, but for some reason most players will just fail to open it. Does anyone have more success?
-
FFMPEG x264 encoding on Android - error with lookahead
6 January 2013, by Tishu

I am using FFMPEG + x264 on Android to encode YUV420 frames to a video file. I use the following code to encode each frame:
avcodec_encode_video2(gVideoWriteCodecCtx, &packet, pCurrentFrame, &gotPacket);
On the first few calls, the frame buffer gets filled and nothing is encoded. When the first encoding happens, a call is made to x264_lookahead_get_frames. I can see there that my frame array is correctly populated, but its first item is NULL. As a consequence, in x264_weights_analyse the reference frame taken as frames[p0] is NULL, and I get an exception there.
In slicetype.c, the first frame in "frames" is NULL:

if( h->param.analyse.i_weighted_pred && b == p1 )
{
    x264_emms();
    x264_weights_analyse( h, fenc, frames[p0], 1 );
    w = fenc->weight[0];
}

The exception happens inside x264_weights_analyse, where ref is NULL:

static void x264_weights_analyse( x264_t *h, x264_frame_t *fenc, x264_frame_t *ref, int b_lookahead )
{
    int i_delta_index = fenc->i_frame - ref->i_frame - 1;

I am surely missing something, as I am sure this encoder works for most people :) Does anyone have an idea why the first frame in the "frames" array is NULL?
Many thanks
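(One workaround worth trying, assuming the crash really is in the lookahead path: configure x264 so that no lookahead or B-frames are used at all. The "zerolatency" tune sets this up; the explicit fields below are the ones that tune touches, shown here as a sketch rather than a verified fix for this crash:)

```c
x264_param_t param;
x264_param_default_preset(&param, "ultrafast", "zerolatency");
/* "zerolatency" implies, among other things: */
param.rc.i_lookahead = 0;   /* no frame-type lookahead */
param.i_sync_lookahead = 0; /* no threaded lookahead buffer */
param.i_bframes = 0;        /* no B-frames */
param.rc.b_mb_tree = 0;     /* mb-tree needs lookahead */
```

If the crash disappears with lookahead disabled, that narrows the problem down to how frames are being queued into the lookahead on the first few calls.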
-
Faster encoding of realtime 3d graphics with opengl and x264
26 December 2012, by cloudraven

I am working on a system that streams compressed video to a client from 3D graphics rendered on the server, as soon as they are rendered. The code already works, but I feel it could be much faster (and it is already a bottleneck in the system).
Here is what I am doing:
First I grab the framebuffer
glReadBuffer( GL_FRONT );
glReadPixels( 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer );
Then I flip the framebuffer, because there is a weird bug with sws_scale (which I am using for colorspace conversion) that flips the image vertically during conversion. So I flip it in advance, nothing fancy.
void VerticalFlip(int width, int height, byte* pixelData, int bitsPerPixel)
{
    byte* temp = new byte[width*bitsPerPixel];
    height--; //remember height array ends at height-1

    for (int y = 0; y < (height+1)/2; y++)
    {
        memcpy(temp, &pixelData[y*width*bitsPerPixel], width*bitsPerPixel);
        memcpy(&pixelData[y*width*bitsPerPixel], &pixelData[(height-y)*width*bitsPerPixel], width*bitsPerPixel);
        memcpy(&pixelData[(height-y)*width*bitsPerPixel], temp, width*bitsPerPixel);
    }
    delete[] temp;
}
Then I convert it to YUV420p
convertCtx = sws_getContext(width, height, PIX_FMT_RGB24, width, height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
uint8_t *src[3] = {buffer, NULL, NULL};
sws_scale(convertCtx, src, &srcstride, 0, height, pic_in.img.plane, pic_in.img.i_stride);
Then I pretty much just call the x264 encoder. I am already using the zerolatency preset.
int frame_size = x264_encoder_encode(_encoder, &nals, &i_nals, _inputPicture, &pic_out);
My guess is that there should be a faster way to capture the frame and convert it to YUV420p. It would be nice to do the YUV420p conversion on the GPU and only then copy the result to system memory, and hopefully there is a way to do the color conversion without needing the flip.
If there is no better way, at least this question may help someone who is trying to do the same thing to do it the way I did.
-
Trouble syncing libavformat/ffmpeg with x264 and RTP
26 December 2012, by Jacob Peddicord

I've been working on some streaming software that takes live feeds from various kinds of cameras and streams over the network using H.264. To accomplish this, I'm using the x264 encoder directly (with the "zerolatency" preset) and feeding NALs as they are available to libavformat to pack into RTP (ultimately RTSP). Ideally, this application should be as real-time as possible. For the most part, this has been working well.
Unfortunately, however, there is some sort of synchronization issue: any video playback on clients seems to show a few smooth frames, followed by a short pause, then more frames; repeat. Additionally, there appears to be approximately a 4-second delay. This happens with every video player I've tried: Totem, VLC, and basic gstreamer pipes.
I've boiled it all down to a somewhat small test case:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <x264.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>

#define WIDTH 640
#define HEIGHT 480
#define FPS 30
#define BITRATE 400000
#define RTP_ADDRESS "127.0.0.1"
#define RTP_PORT 49990

struct AVFormatContext* avctx;
struct x264_t* encoder;
struct SwsContext* imgctx;

uint8_t test = 0x80;

void create_sample_picture(x264_picture_t* picture)
{
    // create a frame to store in
    x264_picture_alloc(picture, X264_CSP_I420, WIDTH, HEIGHT);

    // fake image generation
    // disregard how wrong this is; just writing a quick test
    int strides = WIDTH / 8;
    uint8_t* data = malloc(WIDTH * HEIGHT * 3);
    memset(data, test, WIDTH * HEIGHT * 3);
    test = (test << 1) | (test >> (8 - 1));

    // scale the image
    sws_scale(imgctx, (const uint8_t* const*) &data, &strides, 0, HEIGHT,
              picture->img.plane, picture->img.i_stride);
}

int encode_frame(x264_picture_t* picture, x264_nal_t** nals)
{
    // encode a frame
    x264_picture_t pic_out;
    int num_nals;
    int frame_size = x264_encoder_encode(encoder, nals, &num_nals, picture, &pic_out);

    // ignore bad frames
    if (frame_size < 0)
    {
        return frame_size;
    }

    return num_nals;
}

void stream_frame(uint8_t* payload, int size)
{
    // initalize a packet
    AVPacket p;
    av_init_packet(&p);
    p.data = payload;
    p.size = size;
    p.stream_index = 0;
    p.flags = AV_PKT_FLAG_KEY;
    p.pts = AV_NOPTS_VALUE;
    p.dts = AV_NOPTS_VALUE;

    // send it out
    av_interleaved_write_frame(avctx, &p);
}

int main(int argc, char* argv[])
{
    // initalize ffmpeg
    av_register_all();

    // set up image scaler
    // (in-width, in-height, in-format, out-width, out-height, out-format, scaling-method, 0, 0, 0)
    imgctx = sws_getContext(WIDTH, HEIGHT, PIX_FMT_MONOWHITE,
                            WIDTH, HEIGHT, PIX_FMT_YUV420P,
                            SWS_FAST_BILINEAR, NULL, NULL, NULL);

    // set up encoder presets
    x264_param_t param;
    x264_param_default_preset(&param, "ultrafast", "zerolatency");
    param.i_threads = 3;
    param.i_width = WIDTH;
    param.i_height = HEIGHT;
    param.i_fps_num = FPS;
    param.i_fps_den = 1;
    param.i_keyint_max = FPS;
    param.b_intra_refresh = 0;
    param.rc.i_bitrate = BITRATE;
    param.b_repeat_headers = 1; // whether to repeat headers or write just once
    param.b_annexb = 1;         // place start codes (1) or sizes (0)

    // initalize
    x264_param_apply_profile(&param, "high");
    encoder = x264_encoder_open(&param);

    // at this point, x264_encoder_headers can be used, but it has had no effect

    // set up streaming context. a lot of error handling has been ommitted
    // for brevity, but this should be pretty standard.
    avctx = avformat_alloc_context();
    struct AVOutputFormat* fmt = av_guess_format("rtp", NULL, NULL);
    avctx->oformat = fmt;

    snprintf(avctx->filename, sizeof(avctx->filename), "rtp://%s:%d", RTP_ADDRESS, RTP_PORT);
    if (url_fopen(&avctx->pb, avctx->filename, URL_WRONLY) < 0)
    {
        perror("url_fopen failed");
        return 1;
    }
    struct AVStream* stream = av_new_stream(avctx, 1);

    // initalize codec
    AVCodecContext* c = stream->codec;
    c->codec_id = CODEC_ID_H264;
    c->codec_type = AVMEDIA_TYPE_VIDEO;
    c->flags = CODEC_FLAG_GLOBAL_HEADER;
    c->width = WIDTH;
    c->height = HEIGHT;
    c->time_base.den = FPS;
    c->time_base.num = 1;
    c->gop_size = FPS;
    c->bit_rate = BITRATE;
    avctx->flags = AVFMT_FLAG_RTP_HINT;

    // write the header
    av_write_header(avctx);

    // make some frames
    for (int frame = 0; frame < 10000; frame++)
    {
        // create a sample moving frame
        x264_picture_t* pic = (x264_picture_t*) malloc(sizeof(x264_picture_t));
        create_sample_picture(pic);

        // encode the frame
        x264_nal_t* nals;
        int num_nals = encode_frame(pic, &nals);

        if (num_nals < 0)
            printf("invalid frame size: %d\n", num_nals);

        // send out NALs
        for (int i = 0; i < num_nals; i++)
        {
            stream_frame(nals[i].p_payload, nals[i].i_payload);
        }

        // free up resources
        x264_picture_clean(pic);
        free(pic);

        // stream at approx 30 fps
        printf("frame %d\n", frame);
        usleep(33333);
    }

    return 0;
}

This test shows black lines on a white background that should move smoothly to the left. It was written for ffmpeg 0.6.5, but the problem can be reproduced on 0.8 and 0.10 (from what I've tested so far).
I've taken some shortcuts in error handling to make this example as short as possible while still showing the problem, so please excuse some of the nasty code. I should also note that while an SDP is not used here, I have tried using that already with similar results. The test can be compiled with:
gcc -g -std=gnu99 streamtest.c -lswscale -lavformat -lx264 -lm -lpthread -o streamtest
It can be played with gstreamer directly:
gst-launch udpsrc port=49990 ! application/x-rtp,payload=96,clock-rate=90000 ! rtph264depay ! decodebin ! xvimagesink
You should immediately notice the stuttering. One common "fix" I've seen all over the Internet is to add sync=false to the pipeline:
gst-launch udpsrc port=49990 ! application/x-rtp,payload=96,clock-rate=90000 ! rtph264depay ! decodebin ! xvimagesink sync=false
This causes playback to be smooth (and near-realtime), but is a non-solution and only works with gstreamer. I'd like to fix the problem at the source. I've been able to stream with near-identical parameters using raw ffmpeg and haven't had any issues:
ffmpeg -re -i sample.mp4 -vcodec libx264 -vpre ultrafast -vpre baseline -b 400000 -an -f rtp rtp://127.0.0.1:49990 -an
So clearly I'm doing something wrong. But what is it?