
Advanced search
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (17)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community. -
The statuses of mutualisation instances
13 March 2010, by
For reasons of general compatibility between the mutualisation-management plugin and SPIP's original functions, instance statuses are the same as for any other object (articles...); only their names in the interface differ slightly.
The possible statuses are: prepa (requested), which corresponds to an instance requested by a user; if the site was already created in the past, it is switched to disabled mode. publie (validated), which corresponds to an instance validated by a (...)
On other sites (3358)
-
Screen Recording with FFmpeg-Lib with c++
27 January 2020, by Baschdel
I'm trying to record the whole desktop stream with FFmpeg on Windows.
I found a working example here. The problem is that some of the functions are deprecated, so I tried to replace them with the updated ones. There are still some slight problems: the errors "has triggered a breakpoint." and "not able to read the location." occur.
The bigger problem is that I don't know whether this is the right way to do this at all. My code looks like this :
using namespace std;
/* initialize the resources*/
Recorder::Recorder()
{
/* av_register_all() and avcodec_register_all() are deprecated no-ops
since FFmpeg 4.0 and can simply be removed */
avdevice_register_all(); // still needed to register gdigrab and other device inputs
cout<<"\nall required functions are registered successfully";
}
/* uninitialize the resources */
Recorder::~Recorder()
{
/* avformat_close_input() frees the context and sets the pointer to NULL,
so a separate avformat_free_context() call afterwards is not needed */
avformat_close_input(&pAVFormatContext);
if( !pAVFormatContext )
{
cout<<"\nfile closed successfully";
}
else
{
cout<<"\nunable to close the file";
exit(1);
}
}
/* open the capture device: the screen here; a camera would use a different input format */
int Recorder::openCamera()
{
value = 0;
options = NULL;
pAVFormatContext = NULL;
pAVFormatContext = avformat_alloc_context();//Allocate an AVFormatContext.
openScreen(pAVFormatContext);
/* set frame per second */
value = av_dict_set( &options,"framerate","30",0 );
if(value < 0)
{
cout<<"\nerror in setting dictionary value";
exit(1);
}
value = av_dict_set( &options, "preset", "medium", 0 );
if(value < 0)
{
cout<<"\nerror in setting preset values";
exit(1);
}
value = avformat_find_stream_info(pAVFormatContext,NULL); // must actually run, otherwise the check below tests a stale value
if(value < 0)
{
cout<<"\nunable to find the stream information";
exit(1);
}
VideoStreamIndx = -1;
/* find the first video stream index (av_find_best_stream() could do this in one call) */
for(unsigned int i = 0; i < pAVFormatContext->nb_streams; i++ ) // find video stream position/index
{
if( pAVFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO )
{
VideoStreamIndx = i;
break;
}
}
if( VideoStreamIndx == -1)
{
cout<<"\nunable to find the video stream index. (-1)";
exit(1);
}
/* streams[i]->codec is deprecated: find the decoder from codecpar and
copy the stream parameters into a freshly allocated decoder context */
pAVCodec = avcodec_find_decoder(pAVFormatContext->streams[VideoStreamIndx]->codecpar->codec_id);
pAVCodecContext = avcodec_alloc_context3(pAVCodec);
avcodec_parameters_to_context(pAVCodecContext, pAVFormatContext->streams[VideoStreamIndx]->codecpar);
if( pAVCodec == NULL )
{
cout<<"\nunable to find the decoder";
exit(1);
}
value = avcodec_open2(pAVCodecContext , pAVCodec , NULL);//Initialize the AVCodecContext to use the given AVCodec.
if( value < 0 )
{
cout<<"\nunable to open the av codec";
exit(1);
}
return 0;
}
/* initialize the video output file and its properties */
int Recorder::init_outputfile()
{
outAVFormatContext = NULL;
value = 0;
output_file = "output.mp4";
avformat_alloc_output_context2(&outAVFormatContext, NULL, NULL, output_file);
if (!outAVFormatContext)
{
cout<<"\nerror in allocating av format output context";
exit(1);
}
/* Returns the output format in the list of registered output formats which best matches the provided parameters, or returns NULL if there is no match. */
output_format = av_guess_format(NULL, output_file ,NULL);
if( !output_format )
{
cout<<"\nerror in guessing the video format. try with correct format";
exit(1);
}
video_st = avformat_new_stream(outAVFormatContext ,NULL);
if( !video_st )
{
cout<<"\nerror in creating a av format new stream";
exit(1);
}
outAVCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
if (!outAVCodec)
{
cout << "\nerror in finding the av codecs. try again with correct codec";
exit(1);
}
outAVCodecContext = avcodec_alloc_context3(outAVCodec);
if( !outAVCodecContext )
{
cout<<"\nerror in allocating the codec contexts";
exit(1);
}
/* set the properties of the video file on the allocated context
(assigning video_st->codec here is deprecated and would leak the
context allocated above) */
outAVCodecContext->codec_id = AV_CODEC_ID_MPEG4; // AV_CODEC_ID_H264 // AV_CODEC_ID_MPEG1VIDEO
outAVCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
outAVCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
outAVCodecContext->bit_rate = 400000; // 2500000
outAVCodecContext->width = 1920;
outAVCodecContext->height = 1080;
outAVCodecContext->gop_size = 3;
outAVCodecContext->max_b_frames = 2;
outAVCodecContext->time_base.num = 1;
outAVCodecContext->time_base.den = 30; // 30 fps
/* Some container formats (like MP4) require global headers to be present.
Mark the encoder so that it behaves accordingly. */
if ( outAVFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
{
outAVCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
value = avcodec_open2(outAVCodecContext, outAVCodec, NULL);
if( value < 0)
{
cout<<"\nerror in opening the avcodec";
exit(1);
}
/* copy the encoder parameters into the output stream (replaces the
deprecated write through video_st->codec) */
avcodec_parameters_from_context(video_st->codecpar, outAVCodecContext);
/* create empty video file */
if ( !(outAVFormatContext->flags & AVFMT_NOFILE) )
{
if( avio_open2(&outAVFormatContext->pb , output_file , AVIO_FLAG_WRITE ,NULL, NULL) < 0 )
{
cout<<"\nerror in creating the video file";
exit(1);
}
}
if(!outAVFormatContext->nb_streams)
{
cout<<"\noutput file does not contain any stream";
exit(1);
}
/* imp: mp4 container or some advanced container file required header information*/
value = avformat_write_header(outAVFormatContext , &options);
if(value < 0)
{
cout<<"\nerror in writing the header context";
exit(1);
}
/*
// uncomment here to view the complete video file informations
cout<<"\n\nOutput file information :\n\n";
av_dump_format(outAVFormatContext , 0 ,output_file ,1);
*/
return 0;
}
int Recorder::stop() {
threading = false;
demux->join();
rescale->join();
mux->join();
return 0;
}
int Recorder::start() {
initVideoThreads();
return 0;
}
int Recorder::initVideoThreads() {
demux = new thread(&Recorder::demuxVideoStream, this, pAVCodecContext, pAVFormatContext, VideoStreamIndx);
rescale = new thread(&Recorder::rescaleVideoStream, this, pAVCodecContext, outAVCodecContext);
mux = new thread(&Recorder::encodeVideoStream, this, outAVCodecContext); // was mistakenly assigned to demux, leaving mux uninitialized for stop() to join
return 0;
}
void Recorder::demuxVideoStream(AVCodecContext* codecContext, AVFormatContext* formatContext, int streamIndex)
{
// init packet
AVPacket* packet = (AVPacket*)av_malloc(sizeof(AVPacket));
av_init_packet(packet);
int ctr = 0;
while (threading)
{
if (av_read_frame(formatContext, packet) < 0) {
exit(1);
}
if (packet->stream_index == streamIndex)
{
int return_value; // = 0;
ctr++;
do
{
return_value = avcodec_send_packet(codecContext, packet);
} while (return_value == AVERROR(EAGAIN) && threading);
//int i = avcodec_send_packet(codecContext, packet);
if (return_value < 0 && threading) { // call Decoder
cout << "unable to decode video";
exit(1);
}
}
}
avcodec_send_packet(codecContext, NULL); // flush decoder
// return 0;
}
void Recorder::rescaleVideoStream(AVCodecContext* inCodecContext, AVCodecContext* outCodecContext)
{
bool closing = false;
AVFrame* inFrame = av_frame_alloc();
if (!inFrame)
{
cout << "\nunable to release the avframe resources";
exit(1);
}
int nbytes = av_image_get_buffer_size(outAVCodecContext->pix_fmt, outAVCodecContext->width, outAVCodecContext->height, 32);
uint8_t* video_outbuf = (uint8_t*)av_malloc(nbytes);
if (video_outbuf == NULL)
{
cout << "\nunable to allocate memory";
exit(1);
}
AVFrame* outFrame = av_frame_alloc();//Allocate an AVFrame and set its fields to default values.
if (!outFrame)
{
cout << "\nunable to release the avframe resources for outframe";
exit(1);
}
// Setup the data pointers and linesizes based on the specified image parameters and the provided array.
int value = av_image_fill_arrays(outFrame->data, outFrame->linesize, video_outbuf, AV_PIX_FMT_YUV420P, outAVCodecContext->width, outAVCodecContext->height, 1); // returns : the size in bytes required for src
if (value < 0)
{
cout << "\nerror in filling image array";
}
int ctr = 0;
while (threading || !closing) {
int value = avcodec_receive_frame(inCodecContext, inFrame);
if (value == 0) {
ctr++;
SwsContext* swsCtx_ = sws_getContext(inCodecContext->width,
inCodecContext->height,
inCodecContext->pix_fmt,
outAVCodecContext->width,
outAVCodecContext->height,
outAVCodecContext->pix_fmt,
SWS_BICUBIC, NULL, NULL, NULL);
sws_scale(swsCtx_, inFrame->data, inFrame->linesize, 0, inCodecContext->height, outFrame->data, outFrame->linesize);
sws_freeContext(swsCtx_); // free the per-frame scaler context, otherwise it leaks on every iteration
int return_value;
do
{
return_value = avcodec_send_frame(outCodecContext, outFrame);
} while (return_value == AVERROR(EAGAIN) && threading);
}
closing = (value == AVERROR_EOF);
}
avcodec_send_frame(outCodecContext, NULL);
// av_free(video_outbuf);
// return 0;
}
void Recorder::encodeVideoStream(AVCodecContext* codecContext)
{
bool closing = false; // must start false so the drain loop keeps running until AVERROR_EOF
AVPacket* packet = (AVPacket*)av_malloc(sizeof(AVPacket));
av_init_packet(packet);
int ctr = 0;
while (threading || !closing) {
packet->data = NULL; // packet data will be allocated by the encoder
packet->size = 0;
ctr++;
int value = avcodec_receive_packet(codecContext, packet);
if (value == 0) {
/* rescale from the encoder context's time_base (video_st->codec is deprecated) */
if (packet->pts != AV_NOPTS_VALUE)
packet->pts = av_rescale_q(packet->pts, codecContext->time_base, video_st->time_base);
if (packet->dts != AV_NOPTS_VALUE)
packet->dts = av_rescale_q(packet->dts, codecContext->time_base, video_st->time_base);
//printf("Write frame %3d (size= %2d)\n", j++, packet->size / 1000);
if (av_write_frame(outAVFormatContext, packet) != 0)
{
cout << "\nerror in writing video frame";
}
}
closing = (value == AVERROR_EOF);
}
value = av_write_trailer(outAVFormatContext);
if (value < 0)
{
cout << "\nerror in writing av trailer";
exit(1);
}
// av_free(packet);
// return 0;
}
int Recorder::openScreen(AVFormatContext* pFormatCtx) {
/*
gdigrab is FFmpeg's Windows GDI screen-capture input device.
refer : https://www.ffmpeg.org/ffmpeg-devices.html#gdigrab
(the x11grab device described in the original comment is the X11 equivalent on Linux)
*/
/* the call below opens the whole desktop for screen recording; to capture a camera instead, pass "dshow" (Windows) or "v4l2" (Linux) to av_find_input_format */
pAVInputFormat = av_find_input_format("gdigrab");
//value = avformat_open_input(&pAVFormatContext, ":0.0+10,250", pAVInputFormat, NULL);
value = avformat_open_input(&pAVFormatContext, "desktop", pAVInputFormat, NULL);
if (value != 0)
{
cout << "\nerror in opening input device";
exit(1);
}
return 0;
} -
FFmpeg audio/video out of sync after cutting and concatenating, even after transcoding
4 May 2020, by Ham789
I am attempting to take cuts from a set of videos and concatenate them together with the concat demuxer.

However, the audio is out of sync with the video in the output. The audio seems to drift further out of sync as the video progresses. Interestingly, if I click to seek to another time in the video with the player's progress bar, the audio becomes synced with the video but then gradually drifts out of sync again. Seeking to a new time in the player seems to reset the audio/video; it is as if they are being played back at different rates. I get this behaviour in both QuickTime and VLC.

For each video, I decode it, trim a clip from it and then encode it to 4K resolution at 25 fps together with its audio :

ffmpeg -ss 0.5 -t 0.5 -i input_video1.mp4 -r 25 -vf scale=3840:2160 output_video1.mp4

I then take each of these videos and concatenate them together with the concat demuxer :

ffmpeg -f concat -safe 0 -i cut_videos.txt -c copy -y output.mp4
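
For reference, the concat demuxer expects cut_videos.txt to contain one file directive per line; the names below are placeholders for the clips produced by the trim step:

```
file 'output_video1.mp4'
file 'output_video2.mp4'
file 'output_video3.mp4'
```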



I am taking short cuts of each video (approximately 0.5 s).

I am using Python's subprocess module to automate the cutting and concatenating of the videos.
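As a sketch, that automation might look like the following; the helper names and file names are mine, and adding aresample=async=1 to the trim step is a guess at a fix (it makes the resampler stretch or squeeze audio to follow its timestamps), not part of the original workflow:

```python
import subprocess

def build_trim_cmd(src, dst, start, duration):
    # Re-encode one clip at 25 fps / 4K, as in the question. The extra
    # aresample=async=1 audio filter is an assumption: it nudges audio
    # samples to match their timestamps, which can reduce the A/V drift
    # that accumulates across concatenated cuts.
    return [
        "ffmpeg", "-ss", str(start), "-t", str(duration), "-i", src,
        "-r", "25", "-vf", "scale=3840:2160",
        "-af", "aresample=async=1",
        "-y", dst,
    ]

def build_concat_cmd(list_file, dst):
    # Stream-copy concatenation of the pre-encoded clips, as in the question.
    return ["ffmpeg", "-f", "concat", "-safe", "0", "-i", list_file,
            "-c", "copy", "-y", dst]

def run(cmd):
    # subprocess.run raises CalledProcessError if ffmpeg exits non-zero.
    subprocess.run(cmd, check=True)
```

Each clip would then be trimmed with run(build_trim_cmd(...)) and the results joined with run(build_concat_cmd("cut_videos.txt", "output.mp4")).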



I am not sure if this happens because of the trimming or the concatenation step, but when I play back the intermediate cut video files (output_video1.mp4 in the above command), there seems to be some silence before the audio comes in at the start of the video.

When I concatenate the videos, I sometimes get a lot of these warnings; however, the audio still goes out of sync even when I do not get them :

[mp4 @ 0000021a252ce080] Non-monotonous DTS in output stream 0:1; previous: 51792, current: 50009; changing to 51793. This may result in incorrect timestamps in the output file.

From this post, it seems to be a problem with cutting the videos and their timestamps. The solution proposed in the post is to decode, cut and then re-encode the video; however, I am already doing that.

How can I ensure the audio and video are in sync ? Am I transcoding incorrectly ? This seems to be the only solution I can find online, yet it does not seem to work.

UPDATE :

I took inspiration from this post and separated the video and audio from output_video1.mp4 (note: -an drops audio and -vn drops video, so the flags must match the output names) using :

ffmpeg -i output_video1.mp4 -vcodec copy -an video.mp4

and

ffmpeg -i output_video1.mp4 -acodec copy -vn audio.mp4



I then compared the durations of video.mp4 and audio.mp4 and got 0.57 s and 0.52 s respectively. Since the video is longer, this explains the period of silence in the clips. The post then suggests that transcoding is the solution, but as the commands above show, that does not work for me.
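
Since the audio track comes out shorter than the video, one thing worth trying (my suggestion, not from the linked post) is to pad the audio with silence during the trim step so both streams end together; apad appends silence and -shortest stops muxing at the shorter stream:

```python
import subprocess

def build_padded_trim_cmd(src, dst, start, duration):
    # apad appends silence to the end of the audio stream; combined with
    # -shortest, muxing stops when the (now shorter) video stream ends,
    # so audio and video durations in each clip come out equal.
    return [
        "ffmpeg", "-ss", str(start), "-t", str(duration), "-i", src,
        "-r", "25", "-vf", "scale=3840:2160",
        "-af", "apad", "-shortest",
        "-y", dst,
    ]

# Example (hypothetical file names):
# subprocess.run(build_padded_trim_cmd("input_video1.mp4",
#                                      "output_video1.mp4", 0.5, 0.5),
#                check=True)
```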


Sample Output Log for the Trim Command

built with Apple LLVM version 10.0.0 (clang-1000.11.45.5)
 configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.2 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-13.0.1.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr --enable-videotoolbox --disable-libjack --disable-indev=jack
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input_video1.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Duration: 00:00:04.06, start: 0.000000, bitrate: 14266 kb/s
 Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 3840x2160, 14268 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
 Metadata:
 handler_name : Core Media Video
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 94 kb/s (default)
 Metadata:
 handler_name : Core Media Audio
File 'output_video1.mp4' already exists. Overwrite ? [y/N] y
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
 Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, [?] for help
[libx264 @ 0x7fcae4001e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7fcae4001e00] profile High, level 5.1
[libx264 @ 0x7fcae4001e00] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output_video1.mp4':
 Metadata:
 major_brand : isom
 minor_version : 512
 compatible_brands: isomiso2avc1mp41
 encoder : Lavf58.29.100
 Stream #0:0(und): Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 3840x2160, q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
 Metadata:
 handler_name : Core Media Video
 encoder : Lavc58.54.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
 Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s (default)
 Metadata:
 handler_name : Core Media Audio
 encoder : Lavc58.54.100 aac
frame= 14 fps=7.0 q=-1.0 Lsize= 928kB time=00:00:00.51 bitrate=14884.2kbits/s dup=0 drop=1 speed=0.255x 
video:922kB audio:5kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.194501%
[libx264 @ 0x7fcae4001e00] frame I:1 Avg QP:21.06 size:228519
[libx264 @ 0x7fcae4001e00] frame P:4 Avg QP:22.03 size: 85228
[libx264 @ 0x7fcae4001e00] frame B:9 Avg QP:22.88 size: 41537
[libx264 @ 0x7fcae4001e00] consecutive B-frames: 14.3% 0.0% 0.0% 85.7%
[libx264 @ 0x7fcae4001e00] mb I I16..4: 27.6% 64.3% 8.1%
[libx264 @ 0x7fcae4001e00] mb P I16..4: 9.1% 10.7% 0.2% P16..4: 48.5% 7.3% 3.9% 0.0% 0.0% skip:20.2%
[libx264 @ 0x7fcae4001e00] mb B I16..4: 1.1% 1.0% 0.0% B16..8: 44.5% 2.9% 0.2% direct: 8.3% skip:42.0% L0:45.6% L1:53.2% BI: 1.2%
[libx264 @ 0x7fcae4001e00] 8x8 transform intra:58.2% inter:93.4%
[libx264 @ 0x7fcae4001e00] coded y,uvDC,uvAC intra: 31.4% 62.2% 5.2% inter: 11.4% 30.9% 0.0%
[libx264 @ 0x7fcae4001e00] i16 v,h,dc,p: 15% 52% 12% 21%
[libx264 @ 0x7fcae4001e00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 33% 32% 2% 2% 2% 4% 2% 4%
[libx264 @ 0x7fcae4001e00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 39% 9% 3% 4% 4% 12% 3% 4%
[libx264 @ 0x7fcae4001e00] i8c dc,h,v,p: 43% 36% 18% 3%
[libx264 @ 0x7fcae4001e00] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x7fcae4001e00] ref P L0: 69.3% 8.0% 14.8% 7.9%
[libx264 @ 0x7fcae4001e00] ref B L0: 88.1% 9.2% 2.6%
[libx264 @ 0x7fcae4001e00] ref B L1: 90.2% 9.8%
[libx264 @ 0x7fcae4001e00] kb/s:13475.29
[aac @ 0x7fcae4012400] Qavg: 125.000



-
5 perfect feature combinations to use with Heatmaps and Session Recordings
28 January 2020, by Jake Thornton — Uncategorized