
Media (91)
-
MediaSPIP Simple: future default graphic theme?
26 September 2013
Updated: October 2013
Language: French
Type: Video
-
with chosen
13 September 2013
Updated: September 2013
Language: French
Type: Image
-
without chosen
13 September 2013
Updated: September 2013
Language: French
Type: Image
-
chosen config
13 September 2013
Updated: September 2013
Language: French
Type: Image
-
SPIP - plugins - embed code - Example
2 September 2013
Updated: September 2013
Language: French
Type: Image
-
GetID3 - File information block
9 April 2013
Updated: May 2013
Language: French
Type: Image
Other articles (81)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
-
The user profile
12 April 2011
Each user has a profile page allowing them to edit their personal information. In the default top menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in standalone form.
To get a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)
On other sites (7010)
-
C++ ffmpeg Queue input is backward in time while encoding
15 August 2022, by Turgut

I've made a program that takes a video as input, decodes its video and audio data, then edits the video data and encodes both video and audio (the audio remains unedited). I've managed to get the edited video as output so far, but when I add in the audio I get an error that says "Queue input is backward in time". I used the muxing example from FFmpeg's doc/examples for the encoding; here is what it looks like (I'm not including the video encoding parts since they work just fine):

typedef struct {
    OutputStream video_st, audio_st;
    const AVOutputFormat *fmt;
    AVFormatContext *oc;
    int have_video, have_audio, encode_video, encode_audio;
    std::string name;
} encode_info;

encode_info enc_inf;

void video_encoder::open_audio(AVFormatContext *oc, const AVCodec *codec,
                               OutputStream *ost, AVDictionary *opt_arg)
{
    AVCodecContext *c;
    int nb_samples;
    int ret;
    AVDictionary *opt = NULL;

    c = ost->enc;

    /* open it */
    av_dict_copy(&opt, opt_arg, 0);
    ret = avcodec_open2(c, codec, &opt);
    av_dict_free(&opt);
    if (ret < 0) {
        char errbuf[AV_ERROR_MAX_STRING_SIZE];
        av_strerror(ret, errbuf, sizeof(errbuf));
        fprintf(stderr, "Could not open audio codec: %s\n", errbuf);
        exit(1);
    }

    /* init signal generator */
    ost->t = 0;
    ost->tincr = 2 * M_PI * 110.0 / c->sample_rate;
    /* increment frequency by 110 Hz per second */
    ost->tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;

    if (c->codec->capabilities & AV_CODEC_CAP_VARIABLE_FRAME_SIZE)
        nb_samples = 10000;
    else
        nb_samples = c->frame_size;

    ost->frame = alloc_audio_frame(c->sample_fmt, c->channel_layout,
                                   c->sample_rate, nb_samples);
    ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
                                       c->sample_rate, nb_samples);

    /* copy the stream parameters to the muxer */
    ret = avcodec_parameters_from_context(ost->st->codecpar, c);
    if (ret < 0) {
        fprintf(stderr, "Could not copy the stream parameters\n");
        exit(1);
    }

    /* create resampler context */
    ost->swr_ctx = swr_alloc();
    if (!ost->swr_ctx) {
        fprintf(stderr, "Could not allocate resampler context\n");
        exit(1);
    }

    /* set options */
    av_opt_set_int       (ost->swr_ctx, "in_channel_count",  c->channels,       0);
    av_opt_set_int       (ost->swr_ctx, "in_sample_rate",    c->sample_rate,    0);
    av_opt_set_sample_fmt(ost->swr_ctx, "in_sample_fmt",     AV_SAMPLE_FMT_S16, 0);
    av_opt_set_int       (ost->swr_ctx, "out_channel_count", c->channels,       0);
    av_opt_set_int       (ost->swr_ctx, "out_sample_rate",   c->sample_rate,    0);
    av_opt_set_sample_fmt(ost->swr_ctx, "out_sample_fmt",    c->sample_fmt,     0);

    /* initialize the resampling context */
    if ((ret = swr_init(ost->swr_ctx)) < 0) {
        fprintf(stderr, "Failed to initialize the resampling context\n");
        exit(1);
    }
}


void video_encoder::encode_one_frame()
{
    if (enc_inf.encode_video || enc_inf.encode_audio) {
        /* select the stream to encode */
        if (enc_inf.encode_video &&
            (!enc_inf.encode_audio ||
             av_compare_ts(enc_inf.video_st.next_pts, enc_inf.video_st.enc->time_base,
                           enc_inf.audio_st.next_pts, enc_inf.audio_st.enc->time_base) <= 0)) {
            enc_inf.encode_video = !write_video_frame(enc_inf.oc, &enc_inf.video_st);
        } else {
            std::cout << "Encoding audio" << std::endl;
            enc_inf.encode_audio = !write_audio_frame(enc_inf.oc, &enc_inf.audio_st);
        }
    }
}

int video_encoder::write_audio_frame(AVFormatContext *oc, OutputStream *ost)
{
    AVCodecContext *c;
    AVFrame *frame;
    int ret;
    int dst_nb_samples;

    c = ost->enc;

    frame = audio_frame; // get_audio_frame(ost);

    if (frame) {
        /* convert samples from native format to destination codec format, using the resampler */
        /* compute destination number of samples */
        dst_nb_samples = av_rescale_rnd(swr_get_delay(ost->swr_ctx, c->sample_rate) + frame->nb_samples,
                                        c->sample_rate, c->sample_rate, AV_ROUND_UP);
        //av_assert0(dst_nb_samples == frame->nb_samples);

        /* when we pass a frame to the encoder, it may keep a reference to it
         * internally; make sure we do not overwrite it here */
        ret = av_frame_make_writable(ost->frame);
        if (ret < 0)
            exit(1);

        /* convert to destination format */
        ret = swr_convert(ost->swr_ctx,
                          ost->frame->data, dst_nb_samples,
                          (const uint8_t **)frame->data, frame->nb_samples);
        if (ret < 0) {
            fprintf(stderr, "Error while converting\n");
            exit(1);
        }
        frame = ost->frame;

        frame->pts = av_rescale_q(ost->samples_count, (AVRational){1, c->sample_rate}, c->time_base);
        ost->samples_count += dst_nb_samples;
    }

    return write_frame(oc, c, ost->st, frame, ost->tmp_pkt);
}

void video_encoder::set_audio_frame(AVFrame* frame)
{
    audio_frame = frame;
}



Normally the muxing example above uses get_audio_frame(ost) inside write_audio_frame to create a dummy audio frame, but I want to use the audio that I have decoded from my input video. After decoding an audio frame I pass it to the encoder using set_audio_frame so my encoder can use it, and I removed get_audio_frame(ost) and simply replaced it with audio_frame. Here is what my main loop looks like:

...
open_audio(args);
...
while (current_second < output_duration)
{
    ...
    video_reader_read_frame(buffer, &pts, start_ts);
    edit_decoded_video(buffer);
    ...
    if (frame_type == 2)
        encoder->set_audio_frame(audio_test->get_frame());
    encoder->encode_one_frame();
}
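
For comparison, the dummy generator I removed looks roughly like this in the stock muxing example (reproduced from memory, so details may differ slightly); notably, it is also where ost->next_pts is advanced for the audio stream:

/* Approximate get_audio_frame() from FFmpeg's doc/examples/muxing.c:
 * it synthesizes a sine wave into ost->tmp_frame and advances
 * ost->next_pts by nb_samples for every frame it hands out. */
static AVFrame *get_audio_frame(OutputStream *ost)
{
    AVFrame *frame = ost->tmp_frame;
    int16_t *q = (int16_t *)frame->data[0];

    /* check if we want to generate more frames */
    if (av_compare_ts(ost->next_pts, ost->enc->time_base,
                      STREAM_DURATION, (AVRational){ 1, 1 }) > 0)
        return NULL;

    for (int j = 0; j < frame->nb_samples; j++) {
        int v = (int)(sin(ost->t) * 10000);
        for (int i = 0; i < ost->enc->channels; i++)
            *q++ = v;
        ost->t     += ost->tincr;
        ost->tincr += ost->tincr2;
    }

    frame->pts = ost->next_pts;
    ost->next_pts += frame->nb_samples;

    return frame;
}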



And here is what my decoder looks like:


int video_decode::decode_audio(AVCodecContext *dec, const AVPacket *pkt)
{
    auto& frame = state.av_frame;
    int ret = 0;

    // submit the packet to the decoder
    ret = avcodec_send_packet(dec, pkt);
    if (ret < 0) {
        std::cout << "Error submitting a packet for decoding" << std::endl;
        return ret;
    }

    // get all the available frames from the decoder
    while (ret >= 0) {
        ret = avcodec_receive_frame(dec, frame);
        if (ret < 0) {
            // those two return values are special and mean there is no output
            // frame available, but there were no errors during decoding
            if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN))
                return 0;

            std::cout << "Decode err" << std::endl;
            return ret;
        }

        if (ret < 0)
            return ret;
    }

    return 0;
}

int video_decode::video_reader_read_frame(uint8_t* frame_buffer, int64_t* pts, double seg_start)
{
    // Unpack members of state
    auto& width = state.width;
    auto& height = state.height;
    auto& av_format_ctx = state.av_format_ctx;
    auto& av_codec_ctx = state.av_codec_ctx;
    auto& audio_codec_ctx = state.audio_codec_ctx;
    auto& video_stream_index = state.video_stream_index;
    auto& audio_stream_index = state.audio_stream_index;
    auto& av_frame = state.av_frame;
    auto& av_packet = state.av_packet;
    auto& sws_scaler_ctx = state.sws_scaler_ctx;

    // Decode one frame
    //double pt_in_seconds = (*pts) * (double)state.time_base.num / (double)state.time_base.den;
    if (!this->skipped) {
        this->skipped = true;
        *pts = (int64_t)(seg_start * (double)state.time_base.den / (double)state.time_base.num);
        video_reader_seek_frame(*pts);
    }

    int response;
    while (av_read_frame(av_format_ctx, av_packet) >= 0) {
        // Video decode
        if (av_packet->stream_index == video_stream_index) {
            std::cout << "Decoded VIDEO" << std::endl;

            response = avcodec_send_packet(av_codec_ctx, av_packet);
            if (response < 0) {
                printf("Failed to decode packet: %s\n", av_make_error(response));
                return false;
            }

            response = avcodec_receive_frame(av_codec_ctx, av_frame);
            if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
                av_packet_unref(av_packet);
                continue;
            } else if (response < 0) {
                printf("Failed to decode packet: %s\n", av_make_error(response));
                return false;
            }

            *pts = av_frame->pts;

            // Set up sws scaler
            if (!sws_scaler_ctx) {
                auto source_pix_fmt = correct_for_deprecated_pixel_format(av_codec_ctx->pix_fmt);
                sws_scaler_ctx = sws_getContext(width, height, source_pix_fmt,
                                                width, height, AV_PIX_FMT_RGB0,
                                                SWS_BICUBIC, NULL, NULL, NULL);
            }
            if (!sws_scaler_ctx) {
                printf("Couldn't initialize sw scaler\n");
                return false;
            }

            uint8_t* dest[4] = { frame_buffer, NULL, NULL, NULL };
            int dest_linesize[4] = { width * 4, 0, 0, 0 };
            sws_scale(sws_scaler_ctx, av_frame->data, av_frame->linesize, 0, height, dest, dest_linesize);
            av_packet_unref(av_packet);
            return 1;
        }
        // Audio decode
        if (av_packet->stream_index == audio_stream_index) {
            std::cout << "Decoded AUDIO" << std::endl;
            decode_audio(audio_codec_ctx, av_packet);
            av_packet_unref(av_packet);
            return 2;
        } else {
            av_packet_unref(av_packet);
            continue;
        }

        av_packet_unref(av_packet);
        break;
    }

    return true;
}
void init()
{
    ...
    if (open_codec_context(&audio_stream_index, &audio_dec_ctx, av_format_ctx, AVMEDIA_TYPE_AUDIO) >= 0) {
        audio_stream = av_format_ctx->streams[audio_stream_index];
    }
    ...
}
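
open_codec_context itself is not shown above; a minimal sketch of such a helper, assuming it follows the one in FFmpeg's doc/examples/demuxing_decoding.c (error handling shortened):

/* Sketch: find the best stream of the given type, allocate and open a
 * decoder context for it, and report the stream index back to the caller. */
static int open_codec_context(int *stream_idx, AVCodecContext **dec_ctx,
                              AVFormatContext *fmt_ctx, enum AVMediaType type)
{
    int ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not find %s stream\n", av_get_media_type_string(type));
        return ret;
    }

    int stream_index = ret;
    AVStream *st = fmt_ctx->streams[stream_index];

    /* find a decoder for the stream and allocate its context */
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    if (!dec)
        return AVERROR(EINVAL);
    *dec_ctx = avcodec_alloc_context3(dec);
    if (!*dec_ctx)
        return AVERROR(ENOMEM);

    /* copy the stream parameters into the decoder context and open it */
    if ((ret = avcodec_parameters_to_context(*dec_ctx, st->codecpar)) < 0)
        return ret;
    if ((ret = avcodec_open2(*dec_ctx, dec, NULL)) < 0)
        return ret;

    *stream_idx = stream_index;
    return 0;
}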



My decoder uses the same format context, packet and frame for both video and audio decoding, but separate streams and codec contexts.
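
For reference, the shared state looks roughly like this; the struct name is a placeholder and this is a simplified sketch inferred from the fields unpacked in video_reader_read_frame, not the full definition:

struct VideoReaderState {
    int width, height;
    AVRational time_base;

    AVFormatContext *av_format_ctx;    // one demuxer shared by both streams
    AVCodecContext  *av_codec_ctx;     // video decoder context
    AVCodecContext  *audio_codec_ctx;  // audio decoder context
    int video_stream_index;
    int audio_stream_index;

    AVFrame    *av_frame;              // shared by video and audio decoding
    AVPacket   *av_packet;             // shared by video and audio decoding
    SwsContext *sws_scaler_ctx;        // video scaling only
};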


Why am I getting the "Queue input is backward in time" error, and how can I properly encode the audio? From the looks of it the audio is decoded just fine, and again there are no problems with the video encoding/decoding whatsoever.

-
aacenc_pred: rework the way prediction is done
29 August 2015, by Rostislav Pehlivanov

aacenc_pred: rework the way prediction is done

This commit completely alters the algorithm of prediction. The original commit which introduced prediction was completely incorrect to even remotely care about what the actual coefficients contain or whether any options were enabled. Not my actual fault.

This commit treats prediction the way the decoder does and expects it to be done: like lossy encryption. Everything related to prediction now happens at the very end, just before quantization and encoding of the coefficients. On the decoder side, prediction happens before anything has had a chance to even access the coefficients.

Also, the original implementation had problems because it actually touched the band_type of special bands which already had their scalefactor indices marked, and it's a wonder the assertion wasn't triggered when transmitting those.

Overall, this now drastically increases audio quality, and you should think about enabling it if you don't plan on playing anything encoded on really old low-power ultra-embedded devices, since they might not support decoding of prediction or AAC-Main. Though the specifications were written ages ago, and as times change, so do the FLOPS.

Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
-
Very long ffmpeg command due to drawtext commands for every second
24 April 2022, by principal-ideal-domain

I have a video of about 20 minutes in length and want to show a different text every second using the drawtext filter. I used a Java program to compute a very long ffmpeg command (more than 100,000 characters). Pasting it into PowerShell took a long time, and then I got the error:

Program 'ffmpeg.exe' failed to run: The filename or extension is too long
At line:1 char:1

So the command is obviously too long. Can I somehow move it out into an external file? I'm not looking to use the subtitles filter instead of the drawtext filter, because I rely on special functionalities of drawtext.
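
A hedged sketch of one possible direction, worth verifying against the ffmpeg version in use: ffmpeg can read a filtergraph from a file via -filter_complex_script (or -filter_script for a simple per-stream graph), so the generated drawtext chain could be written to a file instead of the command line. The file names and texts below are placeholders:

filters.txt (one drawtext per second, chained with commas; if line breaks cause parse errors, emit the whole chain on a single line):

drawtext=fontfile=arial.ttf:text='text for second 0':enable='between(t,0,1)':x=10:y=10,
drawtext=fontfile=arial.ttf:text='text for second 1':enable='between(t,1,2)':x=10:y=10,
drawtext=fontfile=arial.ttf:text='text for second 2':enable='between(t,2,3)':x=10:y=10

and then:

ffmpeg -i input.mp4 -filter_complex_script filters.txt -c:a copy output.mp4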