
Other articles (62)
-
Multilang: improving the interface for multilingual blocks
18 February 2011 — Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is activated, a preconfiguration is put in place automatically by MediaSPIP init, so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.
-
Publishing on MediaSPIP
13 June 2013 — Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
Supported formats
28 January 2010 — The following commands give information about the formats and codecs handled by the local ffmpeg installation:
ffmpeg -codecs
ffmpeg -formats
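To check for one codec in particular rather than reading the full listing, the output of these commands can be filtered; for example (assuming a shell where grep is available):
ffmpeg -codecs | grep -i theora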
Supported input video formats
This list is not exhaustive; it highlights the main formats in use:
h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
m4v: raw MPEG-4 video format
flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263
Theora
wmv:
Possible output video formats
To begin with, we (...)
On other sites (7445)
-
microdvd: do not export framerate hint as subtitle packet
8 April 2015, by wm4
microdvd: do not export framerate hint as subtitle packet
MicroDVD has a "hack" for specifying the video framerate the subtitle was authored against. The demuxer reads this hint correctly, but didn't skip it correctly.
This was not noticed, because the exported packet has its duration set to 0, making it invisible (depending on the API user's rendering logic).
Signed-off-by: Michael Niedermayer <michaelni@gmx.at>
-
How to convert MP3 to AMR using ffmpeg on the Windows command line
13 August 2017, by Kumaresan
Using Windows-based ffmpeg to convert MP3 to AMR. For some reason it fails with the error given below. I don't know how to give the correct parameters for AMR.
C:\Program Files (x86)\AMR to MP3 Converter>ffmpeg -i mfile.mp3 -ar 8000 -ab 12.2k audio.amr
FFmpeg version SVN-r26400, Copyright (c) 2000-2011 the FFmpeg developers
built on Jan 18 2011 04:07:05 with gcc 4.4.2
configuration: --enable-gpl --enable-version3 --enable-libgsm --enable-libvorbis
--enable-libtheora --enable-libspeex --enable-libmp3lame --enable-libopenjpeg
--enable-libschroedinger --enable-libopencore_amrwb --enable-libopencore_amrnb
--enable-libvpx --disable-decoder=libvpx --arch=x86 --enable-runtime-cpudetect
--enable-libxvid --enable-libx264 --enable-librtmp --extra-libs='-lrtmp -lpolarssl
-lws2_32 -lwinmm' --target-os=mingw32 --enable-avisynth --enable-w32threads
--cross-prefix=i686-mingw32- --cc='ccache i686-mingw32-gcc' --enable-memalign-hack
libavutil 50.36. 0 / 50.36. 0
libavcore 0.16. 1 / 0.16. 1
libavcodec 52.108. 0 / 52.108. 0
libavformat 52.93. 0 / 52.93. 0
libavdevice 52. 2. 3 / 52. 2. 3
libavfilter 1.74. 0 / 1.74. 0
libswscale 0.12. 0 / 0.12. 0
[mp3 @ 003abeb0] max_analyze_duration reached
[mp3 @ 003abeb0] Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from 'mfile.mp3':
Metadata:
title : SPL_TRACK
artist : Krishnan
album : Lord
genre : Lord
track : 5
Duration: 00:30:06.00, start: 0.000000, bitrate: 127 kb/s
Stream #0.0: Audio: mp3, 44100 Hz, 2 channels, s16, 128 kb/s
[libopencore_amrnb @ 017e0d20] Only mono supported
Output #0, amr, to 'audio.amr':
Stream #0.0: Audio: libopencore_amrnb, 8000 Hz, 2 channels, s16, 12 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Error while opening encoder for output stream #0.0 - maybe incorrect parameters
such as bit_rate, rate, width or height
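The key line in the log is "[libopencore_amrnb @ 017e0d20] Only mono supported": the AMR-NB encoder accepts only mono input, while the MP3 source has 2 channels. A likely fix (a sketch based on that error, not verified on this exact build) is to downmix to one channel with -ac 1:
ffmpeg -i mfile.mp3 -ar 8000 -ac 1 -ab 12.2k audio.amr
-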
FFMPEG: multiplexing streams with different duration
16 April 2018, by Michael IV
I am multiplexing video and audio streams. The video stream comes from generated image data; the audio stream comes from an AAC file. Some audio files are longer than the total video time I set, so my strategy is to stop muxing the audio stream once its time becomes larger than the total video time (the latter I control via the number of encoded video frames).
I won't put the whole setup code here, but it is similar to the muxing.c example from the latest FFmpeg repo. The only difference is that, as I said, I use an audio stream from a file, not a synthetically generated encoded frame. I am pretty sure the issue is my incorrect sync during the muxing loop. Here is what I do:
bool AudioSetup(const char* audioInFileName)
{
    AVOutputFormat* outputF = mOutputFormatContext->oformat;
    auto audioCodecId = outputF->audio_codec;

    if (audioCodecId == AV_CODEC_ID_NONE) {
        return false;
    }

    audio_codec = avcodec_find_encoder(audioCodecId);

    avformat_open_input(&mInputAudioFormatContext, audioInFileName, 0, 0);
    avformat_find_stream_info(mInputAudioFormatContext, 0);
    av_dump_format(mInputAudioFormatContext, 0, audioInFileName, 0);

    for (size_t i = 0; i < mInputAudioFormatContext->nb_streams; i++) {
        if (mInputAudioFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            inAudioStream = mInputAudioFormatContext->streams[i];
            AVCodecParameters* in_codecpar = inAudioStream->codecpar;

            mAudioOutStream.st = avformat_new_stream(mOutputFormatContext, NULL);
            mAudioOutStream.st->id = mOutputFormatContext->nb_streams - 1;

            AVCodecContext* c = avcodec_alloc_context3(audio_codec);
            mAudioOutStream.enc = c;
            c->sample_fmt = audio_codec->sample_fmts[0];
            avcodec_parameters_to_context(c, inAudioStream->codecpar);

            // copy params from input to output audio stream:
            avcodec_parameters_copy(mAudioOutStream.st->codecpar, inAudioStream->codecpar);

            mAudioOutStream.st->time_base.num = 1;
            mAudioOutStream.st->time_base.den = c->sample_rate;
            c->time_base = mAudioOutStream.st->time_base;

            if (mOutputFormatContext->oformat->flags & AVFMT_GLOBALHEADER) {
                c->flags |= CODEC_FLAG_GLOBAL_HEADER;
            }
            break;
        }
    }
    return true;
}
bool Encode()
{
    int cc = av_compare_ts(mVideoOutStream.next_pts, mVideoOutStream.enc->time_base,
                           mAudioOutStream.next_pts, mAudioOutStream.enc->time_base);

    if (mAudioOutStream.st == NULL || cc <= 0) {
        uint8_t* data = GetYUVFrame(); // returns ready video YUV frame to work with
        int ret = 0;
        AVPacket pkt = { 0 };
        av_init_packet(&pkt);
        pkt.size = packet->dataSize; // 'packet' comes from code not shown here
        pkt.data = data;

        const int64_t duration = av_rescale_q(1, mVideoOutStream.enc->time_base, mVideoOutStream.st->time_base);

        pkt.duration = duration;
        pkt.pts = mVideoOutStream.next_pts;
        pkt.dts = mVideoOutStream.next_pts;
        mVideoOutStream.next_pts += duration;

        pkt.stream_index = mVideoOutStream.st->index;
        ret = av_interleaved_write_frame(mOutputFormatContext, &pkt);
    } else if (audio_time < video_time) { // audio_time and video_time are tracked elsewhere (not shown)
        // 5 - duration of video in seconds
        AVRational r = { 60, 1 };
        auto cmp = av_compare_ts(mAudioOutStream.next_pts, mAudioOutStream.enc->time_base, 5, r);
        if (cmp >= 0) {
            mAudioOutStream.next_pts = std::numeric_limits<int64_t>::max();
            return true; // don't mux audio anymore
        }

        AVPacket a_pkt = { 0 };
        av_init_packet(&a_pkt);

        int ret = av_read_frame(mInputAudioFormatContext, &a_pkt);
        // if the audio file is shorter, stop muxing when the end of the file is reached
        if (ret == AVERROR_EOF) {
            mAudioOutStream.next_pts = std::numeric_limits<int64_t>::max();
            return true;
        }

        a_pkt.stream_index = mAudioOutStream.st->index;
        av_packet_rescale_ts(&a_pkt, inAudioStream->time_base, mAudioOutStream.st->time_base);
        mAudioOutStream.next_pts += a_pkt.pts;

        ret = av_interleaved_write_frame(mOutputFormatContext, &a_pkt);
    }
    return false;
}
Now, the video part is flawless. But if the audio track is longer than the video duration, the total video length comes out longer by around 5% to 20%, and it is clear that the audio is contributing to that, since the video frames finish exactly where they are supposed to.
The closest ’hack’ I came up with is this part:
AVRational r = { 60, 1 };
auto cmp = av_compare_ts(mAudioOutStream.next_pts, mAudioOutStream.enc->time_base, 5, r);
if (cmp >= 0) {
    mAudioOutStream.next_pts = std::numeric_limits<int64_t>::max();
    return true;
}
Here I was trying to compare next_pts of the audio stream with the total time set for the video file, which is 5 seconds. By setting r = { 60, 1 } I am converting those seconds into the time_base of the audio stream. At least, that's what I believe I am doing. With this hack I get a very small deviation from the correct movie length when using standard AAC files, that is, a sample rate of 44100, stereo. But if I test with more problematic samples, like AAC with a sample rate of 16000, mono, then the video file gains almost a whole second in length.
I will appreciate it if someone can point out what I am doing wrong here.
Important note: I don't set the duration for any of the contexts. I control the termination of the muxing session, which is based on the video frame count. The audio input stream has a duration, of course, but it doesn't help me, as the video duration is what defines the movie length.
UPDATE:
This is the second bounty attempt.
UPDATE 2:
Actually, my audio timestamp den,num was wrong, while 1,1 is indeed the way to go, as explained by the answer. What was preventing it from working was a bug in this line (my bad):
mAudioOutStream.next_pts += a_pkt.pts;
Which must be:
mAudioOutStream.next_pts = a_pkt.pts;
The bug resulted in an exponential increment of pts, which caused the end of the stream (in terms of pts) to be reached very early, and therefore caused the audio stream to be terminated much earlier than it was supposed to be.
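For reference, here is a minimal sketch of how the corrected audio branch could look once both fixes from the updates are applied: the { 1, 1 } time base for the seconds comparison, and assignment instead of accumulation of next_pts. It reuses the member names from the question and assumes next_pts is tracked in the output stream's time base; treat it as an illustration, not a drop-in replacement.
// Stop muxing audio once its timestamp passes the intended video length (5 s here).
AVRational seconds = { 1, 1 }; // compare against whole seconds
if (av_compare_ts(mAudioOutStream.next_pts, mAudioOutStream.st->time_base, 5, seconds) >= 0) {
    mAudioOutStream.next_pts = std::numeric_limits<int64_t>::max();
    return true; // don't mux audio anymore
}

AVPacket a_pkt = { 0 };
av_init_packet(&a_pkt);
if (av_read_frame(mInputAudioFormatContext, &a_pkt) == AVERROR_EOF) {
    mAudioOutStream.next_pts = std::numeric_limits<int64_t>::max();
    return true; // input audio file exhausted
}

a_pkt.stream_index = mAudioOutStream.st->index;
av_packet_rescale_ts(&a_pkt, inAudioStream->time_base, mAudioOutStream.st->time_base);
mAudioOutStream.next_pts = a_pkt.pts; // assign, do not accumulate
av_interleaved_write_frame(mOutputFormatContext, &a_pkt);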