
Other articles (40)
-
The farm's recurring Cron tasks
1 December 2010
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of all the instances of the shared hosting on a regular basis. Combined with a system Cron on the central site of the shared hosting, this generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...) -
Emballe médias: what is it for?
4 February 2011
This plugin manages sites for publishing documents of all types.
It creates "médias": a "média" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to an article called a "média"; -
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP's development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP's potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (4132)
-
Problems with frame rate on video conversion using ffmpeg with libx264 [migrated]
29 May 2013, by Lars Schroeter
I have problems transcoding some videos. I ran the simplest possible ffmpeg command, yet it takes a very long time and the output file is about 10 times bigger than the source. If I provide the frame rate parameter -r, it works well (small file, fast transcoding). What is the problem and how can I solve it? I don't want to set a fixed frame rate, because I assume it's better to keep the same frame rate as the source, isn't it?
Maybe the problem is something else, because I found many examples on the web where the -r option isn't used. Transcoding to a different format, or from a different source, also works fine without the -r option (I tried with ffmpeg 0.7.15 and also 1.2.1). The videos are uploaded by the users of my website and automatically converted to be suitable for the web, so I need the most general command for automatic conversion.
In the following ffmpeg output you will find these two suspicious messages:
- Frame rate very high for a muxer not effciciently supporting it. Please consider specifiying a lower framerate, a different muxer or -vsync 2
- MB rate (36000000) > level limit (983040)
The ffmpeg command and output (without -r option) :
ffmpeg -i '/tmp/standort_aquarium.mp4' -vcodec libx264 output.mp4
ffmpeg version 0.7.15, Copyright (c) 2000-2013 the FFmpeg developers built on Feb 22 2013 07:18:58 with gcc 4.4.5 configuration : —enable-libdc1394 —prefix=/usr —extra-cflags='-Wall -g ' —cc='ccache cc' —enable-shared —enable-libmp3lame —enable-gpl —enable-libvorbis —enable-pthreads —enable-libfaac —enable-libxvid —enable-postproc —enable-x11grab —enable-libgsm —enable-libtheora —enable-libopencore-amrnb —enable-libopencore-amrwb —enable-libx264 —enable-libspeex —enable-nonfree —disable-stripping —enable-avfilter —enable-libdirac —disable-decoder=libdirac —enable-libfreetype —enable-libschroedinger —disable-encoder=libschroedinger —enable-version3 —enable-libopenjpeg —enable-libvpx —enable-librtmp —extra-libs=-lgcrypt —disable-altivec —disable-armv5te —disable-armv6 —disable-vis
libavutil 50. 43. 0 / 50. 43. 0
libavcodec 52.123. 0 / 52.123. 0
libavformat 52.111. 0 / 52.111. 0
libavdevice 52. 5. 0 / 52. 5. 0
libavfilter 1. 80. 0 / 1. 80. 0
libswscale 0. 14. 1 / 0. 14. 1
libpostproc 51. 2. 0 / 51. 2. 0
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/tmp/standort_aquarium.mp4' :
Metadata :
major_brand : mp42
minor_version : 0
compatible_brands : mp423gp4isom
creation_time : 2013-04-19 15:04:05
Duration : 00:00:18.24, start : 0.000000, bitrate : 2095 kb/s
Stream #0.0(und) : Video : mpeg4, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 2001 kb/s, 14.97 fps, 30k tbr, 30k tbn, 30k tbc
Metadata :
creation_time : 2013-04-19 15:04:05
Stream #0.1(und) : Audio : aac, 48000 Hz, mono, s16, 96 kb/s
Metadata :
creation_time : 2013-04-19 15:04:05
File 'output.mp4' already exists. Overwrite ? [y/N] y
[mp4 @ 0x20eed80] Frame rate very high for a muxer not effciciently supporting it.
Please consider specifiying a lower framerate, a different muxer or -vsync 2
[buffer @ 0x20f8820] w:640 h:480 pixfmt:yuv420p tb:1/1000000 sar:1/1 sws_param :
[libx264 @ 0x20efde0] Default settings detected, using medium profile
[libx264 @ 0x20efde0] using SAR=1/1
[libx264 @ 0x20efde0] MB rate (36000000) > level limit (983040)
[libx264 @ 0x20efde0] using cpu capabilities : MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
[libx264 @ 0x20efde0] profile High, level 5.1
[libx264 @ 0x20efde0] 264 - core 118 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options : cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4' :
Metadata :
major_brand : mp42
minor_version : 0
compatible_brands : mp423gp4isom
creation_time : 2013-04-19 15:04:05
encoder : Lavf52.111.0
Stream #0.0(und) : Video : libx264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 30k tbn, 30k tbc
Metadata :
creation_time : 2013-04-19 15:04:05
Stream #0.1(und) : Audio : libfaac, 48000 Hz, mono, s16, 64 kb/s
Metadata :
creation_time : 2013-04-19 15:04:05
Stream mapping :
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Press [q] to stop, [?] for help
frame=542630 fps=132 q=33.0 Lsize= 77226kB time=00:00:18.08 bitrate=34976.2kbits/s dup=542358 drop=0
video:68604kB audio:143kB global headers:0kB muxing overhead 12.333275%
frame I:2174 Avg QP:18.72 size : 25040
[libx264 @ 0x20efde0] frame P:136846 Avg QP:25.27 size : 56
[libx264 @ 0x20efde0] frame B:403610 Avg QP:32.99 size : 20
[libx264 @ 0x20efde0] consecutive B-frames : 0.8% 0.0% 0.1% 99.1%
[libx264 @ 0x20efde0] mb I I16..4 : 5.5% 83.3% 11.1%
[libx264 @ 0x20efde0] mb P I16..4 : 0.0% 0.0% 0.0% P16..4 : 0.5% 0.0% 0.0% 0.0% 0.0% skip:99.4%
[libx264 @ 0x20efde0] mb B I16..4 : 0.0% 0.0% 0.0% B16..8 : 0.0% 0.0% 0.0% direct : 0.0% skip:100.0% L0:21.2% L1:78.8% BI : 0.0%
[libx264 @ 0x20efde0] 8x8 transform intra:83.1% inter:85.2%
[libx264 @ 0x20efde0] coded y,uvDC,uvAC intra : 91.2% 95.8% 80.7% inter : 0.0% 0.1% 0.0%
[libx264 @ 0x20efde0] i16 v,h,dc,p : 13% 40% 12% 35%
[libx264 @ 0x20efde0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu : 19% 34% 15% 4% 4% 5% 6% 7% 8%
[libx264 @ 0x20efde0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu : 20% 38% 6% 4% 6% 6% 8% 6% 6%
[libx264 @ 0x20efde0] i8c dc,h,v,p : 39% 32% 19% 10%
[libx264 @ 0x20efde0] Weighted P-Frames : Y:0.0% UV:0.0%
[libx264 @ 0x20efde0] ref P L0 : 91.5% 5.2% 2.8% 0.4% 0.0%
[libx264 @ 0x20efde0] ref B L0 : 55.7% 43.5% 0.8%
[libx264 @ 0x20efde0] ref B L1 : 97.9% 2.1%
[libx264 @ 0x20efde0] kb/s:31071.04
The ffmpeg command and output with the -r 30000/1001 option:
ffmpeg -i '/tmp/standort_aquarium.mp4' -r 30000/1001 -vcodec libx264 output.mp4
ffmpeg version 0.7.15, Copyright (c) 2000-2013 the FFmpeg developers
built on Feb 22 2013 07:18:58 with gcc 4.4.5
configuration : —enable-libdc1394 —prefix=/usr —extra-cflags='-Wall -g ' —cc='ccache cc' —enable-shared —enable-libmp3lame —enable-gpl —enable-libvorbis —enable-pthreads —enable-libfaac —enable-libxvid —enable-postproc —enable-x11grab —enable-libgsm —enable-libtheora —enable-libopencore-amrnb —enable-libopencore-amrwb —enable-libx264 —enable-libspeex —enable-nonfree —disable-stripping —enable-avfilter —enable-libdirac —disable-decoder=libdirac —enable-libfreetype —enable-libschroedinger —disable-encoder=libschroedinger —enable-version3 —enable-libopenjpeg —enable-libvpx —enable-librtmp —extra-libs=-lgcrypt —disable-altivec —disable-armv5te —disable-armv6 —disable-vis
libavutil 50. 43. 0 / 50. 43. 0
libavcodec 52.123. 0 / 52.123. 0
libavformat 52.111. 0 / 52.111. 0
libavdevice 52. 5. 0 / 52. 5. 0
libavfilter 1. 80. 0 / 1. 80. 0
libswscale 0. 14. 1 / 0. 14. 1
libpostproc 51. 2. 0 / 51. 2. 0
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/tmp/standort_aquarium.mp4' :
Metadata :
major_brand : mp42
minor_version : 0
compatible_brands : mp423gp4isom
creation_time : 2013-04-19 15:04:05
Duration : 00:00:18.24, start : 0.000000, bitrate : 2095 kb/s
Stream #0.0(und) : Video : mpeg4, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 2001 kb/s, 14.97 fps, 30k tbr, 30k tbn, 30k tbc
Metadata :
creation_time : 2013-04-19 15:04:05
Stream #0.1(und) : Audio : aac, 48000 Hz, mono, s16, 96 kb/s
Metadata :
creation_time : 2013-04-19 15:04:05
File 'output.mp4' already exists. Overwrite ? [y/N] y
[buffer @ 0x132e820] w:640 h:480 pixfmt:yuv420p tb:1/1000000 sar:1/1 sws_param :
[libx264 @ 0x1325de0] Default settings detected, using medium profile
[libx264 @ 0x1325de0] using SAR=1/1
[libx264 @ 0x1325de0] using cpu capabilities : MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
[libx264 @ 0x1325de0] profile High, level 3.0
[libx264 @ 0x1325de0] 264 - core 118 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options : cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4' :
Metadata :
major_brand : mp42
minor_version : 0
compatible_brands : mp423gp4isom
creation_time : 2013-04-19 15:04:05
encoder : Lavf52.111.0
Stream #0.0(und) : Video : libx264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 30k tbn, 29.97 tbc
Metadata :
creation_time : 2013-04-19 15:04:05
Stream #0.1(und) : Audio : libfaac, 48000 Hz, mono, s16, 64 kb/s
Metadata :
creation_time : 2013-04-19 15:04:05
Stream mapping :
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Press [q] to stop, [?] for help
frame= 542 fps= 36 q=29.0 Lsize= 2059kB time=00:00:18.01 bitrate= 936.3kbits/s dup=270 drop=0
video:1904kB audio:143kB global headers:0kB muxing overhead 0.609224%
frame I:3 Avg QP:22.39 size : 14773
[libx264 @ 0x1325de0] frame P:514 Avg QP:23.98 size : 3675
[libx264 @ 0x1325de0] frame B:25 Avg QP:27.44 size : 643
[libx264 @ 0x1325de0] consecutive B-frames : 93.7% 0.0% 1.1% 5.2%
[libx264 @ 0x1325de0] mb I I16..4 : 16.4% 78.3% 5.3%
[libx264 @ 0x1325de0] mb P I16..4 : 1.6% 6.3% 0.3% P16..4 : 30.8% 8.6% 3.1% 0.0% 0.0% skip:49.4%
[libx264 @ 0x1325de0] mb B I16..4 : 0.4% 0.7% 0.0% B16..8 : 13.2% 1.6% 0.2% direct : 0.3% skip:83.6% L0:50.0% L1:47.1% BI : 2.9%
[libx264 @ 0x1325de0] 8x8 transform intra:77.1% inter:83.1%
[libx264 @ 0x1325de0] coded y,uvDC,uvAC intra : 62.0% 76.4% 24.4% inter : 17.9% 26.3% 2.3%
[libx264 @ 0x1325de0] i16 v,h,dc,p : 14% 60% 13% 13%
[libx264 @ 0x1325de0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu : 15% 35% 33% 2% 3% 3% 3% 3% 4%
[libx264 @ 0x1325de0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu : 15% 40% 12% 4% 7% 7% 7% 5% 4%
[libx264 @ 0x1325de0] i8c dc,h,v,p : 46% 34% 16% 4%
[libx264 @ 0x1325de0] Weighted P-Frames : Y:8.0% UV:4.5%
[libx264 @ 0x1325de0] ref P L0 : 65.6% 16.7% 8.8% 7.9% 0.9%
[libx264 @ 0x1325de0] ref B L0 : 85.9% 13.3% 0.8%
[libx264 @ 0x1325de0] ref B L1 : 88.7% 11.3%
[libx264 @ 0x1325de0] kb/s:862.28
The video source is temporarily available at: https://www.dropbox.com/s/4xg147z77u40g87/standort_aquarium.mp4
-
FFMPEG : cannot play MPEG4 video encoded from images. Duration and bitrate undefined
17 June 2013, by KaiK
I've been trying to put an H264 video stream, created from images, into an MPEG4 container. I can produce the video stream from the images successfully, but when muxing it into the container I must be doing something wrong, because no player can play the result. The exception is ffplay, which plays the video to the end and then freezes on the last image forever.
ffplay can identify neither the duration nor the bitrate, so I suppose it might be an issue related to dts and pts, but I've searched for how to solve it without success.
Here's the ffplay output:
~$ ffplay testContainer.mp4
ffplay version git-2012-01-31-c673671 Copyright (c) 2003-2012 the FFmpeg developers
built on Feb 7 2012 20:32:12 with gcc 4.4.3
configuration: --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable- libfaac --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-x11grab --enable-libvpx --enable-libmp3lame --enable-debug=3
libavutil 51. 36.100 / 51. 36.100
libavcodec 54. 0.102 / 54. 0.102
libavformat 54. 0.100 / 54. 0.100
libavdevice 53. 4.100 / 53. 4.100
libavfilter 2. 60.100 / 2. 60.100
libswscale 2. 1.100 / 2. 1.100
libswresample 0. 6.100 / 0. 6.100
libpostproc 52. 0.100 / 52. 0.100
[h264 @ 0xa4849c0] max_analyze_duration 5000000 reached at 5000000
[h264 @ 0xa4849c0] Estimating duration from bitrate, this may be inaccurate
Input #0, h264, from 'testContainer.mp4':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p, 512x512, 25 fps, 25 tbr, 1200k tbn, 50 tbc
2.74 A-V: 0.000 fd= 0 aq= 0KB vq= 160KB sq= 0B f=0/0 0/0
Structure
My code is C++, so I have a class that handles all the encoding, and a main that initializes it, passes some images in a loop, and finally signals the end of the process, as follows:
int main (int argc, const char * argv[])
{
MyVideoEncoder* videoEncoder = new MyVideoEncoder(512, 512, 512, 512, "output/testContainer.mp4", 25, 20);
if(!videoEncoder->initWithCodec(MyVideoEncoder::H264))
{
std::cout << "something really bad happened. Exit!!" << std::endl;
exit(-1);
}
/* encode 1 second of video */
for(int i=0;i<228;i++) {
std::stringstream filepath;
filepath << "input2/image" << i << ".jpg";
videoEncoder->encodeFrameFromJPG(const_cast<char*>(filepath.str().c_str()));
}
videoEncoder->endEncoding();
}
Hints
I've seen a lot of examples of decoding one video and encoding it into another, but no working example of muxing a video from scratch, so I'm not sure how to proceed with the pts and dts packet values. That's why I suspect the issue must be in the following method:
bool MyVideoEncoder::encodeImageAsFrame(){
bool res = false;
pTempFrame->pts = frameCount * frameRate * 90; //90Hz by the standard for PTS-values
frameCount++;
/* encode the image */
out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, pTempFrame);
if (out_size > 0) {
AVPacket pkt;
av_init_packet(&pkt);
pkt.pts = pkt.dts = 0;
if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE) {
pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
pVideoStream->codec->time_base, pVideoStream->time_base);
pkt.dts = pTempFrame->pts;
}
if (pVideoStream->codec->coded_frame->key_frame) {
pkt.flags |= AV_PKT_FLAG_KEY;
}
pkt.stream_index = pVideoStream->index;
pkt.data = outbuf;
pkt.size = out_size;
res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
}
return res;
}
Any help or insight would be appreciated. Thanks in advance!
P.S. The rest of the code, where the configuration is done, is as follows:
// MyVideoEncoder.cpp
#include "MyVideoEncoder.h"
#include "Image.hpp"
#include <cstring>
#include <sstream>
#include
#define MAX_AUDIO_PACKET_SIZE (128 * 1024)
MyVideoEncoder::MyVideoEncoder(int inwidth, int inheight,
int outwidth, int outheight, char* fileOutput, int framerate,
int compFactor) {
inWidth = inwidth;
inHeight = inheight;
outWidth = outwidth;
outHeight = outheight;
pathToMovie = fileOutput;
frameRate = framerate;
compressionFactor = compFactor;
frameCount = 0;
}
MyVideoEncoder::~MyVideoEncoder() {
}
bool MyVideoEncoder::initWithCodec(
MyVideoEncoder::encoderType type) {
if (!initializeEncoder(type))
return false;
if (!configureFrames())
return false;
return true;
}
bool MyVideoEncoder::encodeFrameFromJPG(char* filepath) {
setJPEGImage(filepath);
return encodeImageAsFrame();
}
bool MyVideoEncoder::encodeDelayedFrames(){
bool res = false;
while(out_size > 0)
{
pTempFrame->pts = frameCount * frameRate * 90; //90Hz by the standard for PTS-values
frameCount++;
out_size = avcodec_encode_video(pVideoStream->codec, outbuf, outbuf_size, NULL);
if (out_size > 0)
{
AVPacket pkt;
av_init_packet(&pkt);
pkt.pts = pkt.dts = 0;
if (pVideoStream->codec->coded_frame->pts != AV_NOPTS_VALUE) {
pkt.pts = av_rescale_q(pVideoStream->codec->coded_frame->pts,
pVideoStream->codec->time_base, pVideoStream->time_base);
pkt.dts = pTempFrame->pts;
}
if (pVideoStream->codec->coded_frame->key_frame) {
pkt.flags |= AV_PKT_FLAG_KEY;
}
pkt.stream_index = pVideoStream->index;
pkt.data = outbuf;
pkt.size = out_size;
res = (av_interleaved_write_frame(pFormatContext, &pkt) == 0);
}
}
return res;
}
void MyVideoEncoder::endEncoding() {
encodeDelayedFrames();
closeEncoder();
}
bool MyVideoEncoder::setJPEGImage(char* imgFilename) {
Image* rgbImage = new Image();
rgbImage->read_jpeg_image(imgFilename);
bool ret = setImageFromRGBArray(rgbImage->get_data());
delete rgbImage;
return ret;
}
bool MyVideoEncoder::setImageFromRGBArray(unsigned char* data) {
memcpy(pFrameRGB->data[0], data, 3 * inWidth * inHeight);
int ret = sws_scale(img_convert_ctx, pFrameRGB->data, pFrameRGB->linesize,
0, inHeight, pTempFrame->data, pTempFrame->linesize);
pFrameRGB->pts++;
if (ret)
return true;
else
return false;
}
bool MyVideoEncoder::initializeEncoder(encoderType type) {
av_register_all();
pTempFrame = avcodec_alloc_frame();
pTempFrame->pts = 0;
pOutFormat = NULL;
pFormatContext = NULL;
pVideoStream = NULL;
pAudioStream = NULL;
bool res = false;
// Create format
switch (type) {
case MyVideoEncoder::H264:
pOutFormat = av_guess_format("h264", NULL, NULL);
break;
case MyVideoEncoder::MPEG1:
pOutFormat = av_guess_format("mpeg", NULL, NULL);
break;
default:
pOutFormat = av_guess_format(NULL, pathToMovie.c_str(), NULL);
break;
}
if (!pOutFormat) {
pOutFormat = av_guess_format(NULL, pathToMovie.c_str(), NULL);
if (!pOutFormat) {
std::cout << "output format not found" << std::endl;
return false;
}
}
// allocate context
pFormatContext = avformat_alloc_context();
if(!pFormatContext)
{
std::cout << "cannot alloc format context" << std::endl;
return false;
}
pFormatContext->oformat = pOutFormat;
memcpy(pFormatContext->filename, pathToMovie.c_str(), min( (const int) pathToMovie.length(), (const int)sizeof(pFormatContext->filename)));
//Add video and audio streams
pVideoStream = AddVideoStream(pFormatContext,
pOutFormat->video_codec);
// Set the output parameters
av_dump_format(pFormatContext, 0, pathToMovie.c_str(), 1);
// Open Video stream
if (pVideoStream) {
res = openVideo(pFormatContext, pVideoStream);
}
if (res && !(pOutFormat->flags & AVFMT_NOFILE)) {
if (avio_open(&pFormatContext->pb, pathToMovie.c_str(), AVIO_FLAG_WRITE) < 0) {
res = false;
std::cout << "Cannot open output file" << std::endl;
}
}
if (res) {
avformat_write_header(pFormatContext,NULL);
}
else{
freeMemory();
std::cout << "Cannot init encoder" << std::endl;
}
return res;
}
AVStream *MyVideoEncoder::AddVideoStream(AVFormatContext *pContext, CodecID codec_id)
{
AVCodecContext *pCodecCxt = NULL;
AVStream *st = NULL;
st = avformat_new_stream(pContext, NULL);
if (!st)
{
std::cout << "Cannot add new video stream" << std::endl;
return NULL;
}
st->id = 0;
pCodecCxt = st->codec;
pCodecCxt->codec_id = (CodecID)codec_id;
pCodecCxt->codec_type = AVMEDIA_TYPE_VIDEO;
pCodecCxt->frame_number = 0;
// Put sample parameters.
pCodecCxt->bit_rate = outWidth * outHeight * 3 * frameRate/ compressionFactor;
pCodecCxt->width = outWidth;
pCodecCxt->height = outHeight;
/* frames per second */
pCodecCxt->time_base= (AVRational){1,frameRate};
/* pixel format must be YUV */
pCodecCxt->pix_fmt = PIX_FMT_YUV420P;
if (pCodecCxt->codec_id == CODEC_ID_H264)
{
av_opt_set(pCodecCxt->priv_data, "preset", "slow", 0);
av_opt_set(pCodecCxt->priv_data, "vprofile", "baseline", 0);
pCodecCxt->max_b_frames = 16;
}
if (pCodecCxt->codec_id == CODEC_ID_MPEG1VIDEO)
{
pCodecCxt->mb_decision = 1;
}
if(pContext->oformat->flags & AVFMT_GLOBALHEADER)
{
pCodecCxt->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
pCodecCxt->coder_type = 1; // coder = 1
pCodecCxt->flags|=CODEC_FLAG_LOOP_FILTER; // flags=+loop
pCodecCxt->me_range = 16; // me_range=16
pCodecCxt->gop_size = 50; // g=50
pCodecCxt->keyint_min = 25; // keyint_min=25
return st;
}
bool MyVideoEncoder::openVideo(AVFormatContext *oc, AVStream *pStream)
{
AVCodec *pCodec;
AVCodecContext *pContext;
pContext = pStream->codec;
// Find the video encoder.
pCodec = avcodec_find_encoder(pContext->codec_id);
if (!pCodec)
{
std::cout << "Cannot found video codec" << std::endl;
return false;
}
// Open the codec.
if (avcodec_open2(pContext, pCodec, NULL) < 0)
{
std::cout << "Cannot open video codec" << std::endl;
return false;
}
return true;
}
bool MyVideoEncoder::configureFrames() {
/* alloc image and output buffer */
outbuf_size = outWidth*outHeight*3;
outbuf = (uint8_t*) malloc(outbuf_size);
av_image_alloc(pTempFrame->data, pTempFrame->linesize, pVideoStream->codec->width,
pVideoStream->codec->height, pVideoStream->codec->pix_fmt, 1);
//Alloc RGB temp frame
pFrameRGB = avcodec_alloc_frame();
if (pFrameRGB == NULL)
return false;
avpicture_alloc((AVPicture *) pFrameRGB, PIX_FMT_RGB24, inWidth, inHeight);
pFrameRGB->pts = 0;
//Set SWS context to convert from RGB images to YUV images
if (img_convert_ctx == NULL) {
img_convert_ctx = sws_getContext(inWidth, inHeight, PIX_FMT_RGB24,
outWidth, outHeight, pVideoStream->codec->pix_fmt, /*SWS_BICUBIC*/
SWS_FAST_BILINEAR, NULL, NULL, NULL);
if (img_convert_ctx == NULL) {
fprintf(stderr, "Cannot initialize the conversion context!\n");
return false;
}
}
return true;
}
void MyVideoEncoder::closeEncoder() {
av_write_frame(pFormatContext, NULL);
av_write_trailer(pFormatContext);
freeMemory();
}
void MyVideoEncoder::freeMemory()
{
bool res = true;
if (pFormatContext)
{
// close video stream
if (pVideoStream)
{
closeVideo(pFormatContext, pVideoStream);
}
// Free the streams.
for(size_t i = 0; i < pFormatContext->nb_streams; i++)
{
av_freep(&pFormatContext->streams[i]->codec);
av_freep(&pFormatContext->streams[i]);
}
if (!(pFormatContext->flags & AVFMT_NOFILE) && pFormatContext->pb)
{
avio_close(pFormatContext->pb);
}
// Free the stream.
av_free(pFormatContext);
pFormatContext = NULL;
}
}
void MyVideoEncoder::closeVideo(AVFormatContext *pContext, AVStream *pStream)
{
avcodec_close(pStream->codec);
if (pTempFrame)
{
if (pTempFrame->data)
{
av_free(pTempFrame->data[0]);
pTempFrame->data[0] = NULL;
}
av_free(pTempFrame);
pTempFrame = NULL;
}
if (pFrameRGB)
{
if (pFrameRGB->data)
{
av_free(pFrameRGB->data[0]);
pFrameRGB->data[0] = NULL;
}
av_free(pFrameRGB);
pFrameRGB = NULL;
}
}