
Other articles (6)
-
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community. -
Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
The plugin: Podcasts
14 July 2010
The problem of podcasting is, once again, a problem that highlights the standardisation of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly geared towards iTunes, whose spec is here; and the "Media RSS Module" format, which is more "open" and is supported notably by Yahoo and the Miro software.
File types supported in the feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg; .m4a audio/x-m4a; .mp4 (...)
On other sites (4206)
-
Revision ccef8842d2: Allow full coeff probability model and cost update
13 August 2014, by Jingning Han
Changed Paths:
Modify /vp9/encoder/vp9_speed_features.c
Allow full coeff probability model and cost update
This commit moves the simplified coefficient probability model
and costing update to speed 4, and turns on chessboard pattern
mode search for sub 720p sequences. The overall coding performance
of speed 3 is improved:
derf 0.889%
stdhd 1.744%
The speed 3 runtime for the test sequences is improved:
bus cif at 1000 kbps 9823 ms -> 9642 ms
pedestrian 1080p 2000 kbps 189559 ms -> 183284 ms
Change-Id: Iecbc7496a68f31fd49fb09f8dfd97c028d675a5d
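For context, speed 3 and speed 4 here are the VP9 encoder presets that an application selects through the cpu-used control. A minimal sketch of selecting that preset with the libvpx encoder API (init_vp9_speed3 is a hypothetical helper; frame encoding and detailed error handling are omitted):
#include <vpx/vpx_encoder.h>
#include <vpx/vp8cx.h> /* VP8E_SET_CPUUSED is shared by the VP9 encoder */
/* Hypothetical helper: configure a VP9 encoder for the speed 3 preset
 * discussed above. */
static int init_vp9_speed3(vpx_codec_ctx_t *codec, int width, int height) {
    vpx_codec_enc_cfg_t cfg;
    if (vpx_codec_enc_config_default(vpx_codec_vp9_cx(), &cfg, 0))
        return -1;
    cfg.g_w = width;
    cfg.g_h = height;
    if (vpx_codec_enc_init(codec, vpx_codec_vp9_cx(), &cfg, 0))
        return -1;
    /* equivalent to vpxenc --cpu-used=3; higher values trade quality for speed */
    return vpx_codec_control(codec, VP8E_SET_CPUUSED, 3);
}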
-
Cropping Square Video using FFmpeg
27 June 2014, by zoruc
Updated
So I am trying to decode an mp4 file, crop the video into a square, and then re-encode it back out to another mp4 file. This is my current code, but there are a few issues with it.
One is that the video doesn't keep its rotation after it has been re-encoded.
Second, the frames get written to a very fast video file that is not the same length as the original.
Third, there is no sound.
Lastly, and most importantly: do I need AVFilter to do the frame cropping, or can it just be done per frame, by cropping each decoded frame and then encoding it back out?
// headers for the libav* and libc calls used below
#include <stdio.h>
#include <string.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>

const char *inputPath = "test.mp4";
const char *outPath = "cropped.mp4";
const char *outFileType = "mp4";
static AVFrame *oframe = NULL;
static AVFilterGraph *filterGraph = NULL;
static AVFilterContext *crop_ctx = NULL;
static AVFilterContext *buffersink_ctx = NULL;
static AVFilterContext *buffer_ctx = NULL;
int err;
int crop_video(int width, int height) {
av_register_all();
avcodec_register_all();
avfilter_register_all();
AVFormatContext *inCtx = NULL;
// open input file
err = avformat_open_input(&inCtx, inputPath, NULL, NULL);
if (err < 0) {
printf("error at open input in\n");
return err;
}
// get input file stream info
err = avformat_find_stream_info(inCtx, NULL);
if (err < 0) {
printf("error at find stream info\n");
return err;
}
// get info about video
av_dump_format(inCtx, 0, inputPath, 0);
// find video input stream
int vs = -1;
int s;
for (s = 0; s < inCtx->nb_streams; ++s) {
if (inCtx->streams[s] && inCtx->streams[s]->codec && inCtx->streams[s]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
vs = s;
break;
}
}
// check if video stream is valid
if (vs == -1) {
printf("error at open video stream\n");
return -1;
}
// set output format
AVOutputFormat * outFmt = av_guess_format(outFileType, NULL, NULL);
if (!outFmt) {
printf("error at output format\n");
return -1;
}
// get an output context to write to
AVFormatContext *outCtx = NULL;
err = avformat_alloc_output_context2(&outCtx, outFmt, NULL, NULL);
if (err < 0 || !outCtx) {
printf("error at output context\n");
return err;
}
// input and output stream
AVStream *outStrm = avformat_new_stream(outCtx, NULL);
AVStream *inStrm = inCtx->streams[vs];
// add a new codec for the output stream
AVCodec *codec = NULL;
avcodec_get_context_defaults3(outStrm->codec, codec);
outStrm->codec->thread_count = 1;
outStrm->codec->coder_type = AVMEDIA_TYPE_VIDEO;
if(outCtx->oformat->flags & AVFMT_GLOBALHEADER) {
outStrm->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
outStrm->codec->sample_aspect_ratio = outStrm->sample_aspect_ratio = inStrm->sample_aspect_ratio;
err = avio_open(&outCtx->pb, outPath, AVIO_FLAG_WRITE);
if (err < 0) {
printf("error at opening outpath\n");
return err;
}
outStrm->disposition = inStrm->disposition;
outStrm->codec->bits_per_raw_sample = inStrm->codec->bits_per_raw_sample;
outStrm->codec->chroma_sample_location = inStrm->codec->chroma_sample_location;
outStrm->codec->codec_id = inStrm->codec->codec_id;
outStrm->codec->codec_type = inStrm->codec->codec_type;
if (!outStrm->codec->codec_tag) {
if (! outCtx->oformat->codec_tag
|| av_codec_get_id (outCtx->oformat->codec_tag, inStrm->codec->codec_tag) == outStrm->codec->codec_id
|| av_codec_get_tag(outCtx->oformat->codec_tag, inStrm->codec->codec_id) <= 0) {
outStrm->codec->codec_tag = inStrm->codec->codec_tag;
}
}
outStrm->codec->bit_rate = inStrm->codec->bit_rate;
outStrm->codec->rc_max_rate = inStrm->codec->rc_max_rate;
outStrm->codec->rc_buffer_size = inStrm->codec->rc_buffer_size;
const size_t extra_size_alloc = (inStrm->codec->extradata_size > 0) ?
(inStrm->codec->extradata_size + FF_INPUT_BUFFER_PADDING_SIZE) :
0;
if (extra_size_alloc) {
outStrm->codec->extradata = (uint8_t*)av_mallocz(extra_size_alloc);
memcpy( outStrm->codec->extradata, inStrm->codec->extradata, inStrm->codec->extradata_size);
}
outStrm->codec->extradata_size = inStrm->codec->extradata_size;
AVRational input_time_base = inStrm->time_base;
AVRational frameRate = {25, 1};
if (inStrm->r_frame_rate.num && inStrm->r_frame_rate.den
&& (1.0 * inStrm->r_frame_rate.num / inStrm->r_frame_rate.den < 1000.0)) {
frameRate.num = inStrm->r_frame_rate.num;
frameRate.den = inStrm->r_frame_rate.den;
}
outStrm->r_frame_rate = frameRate;
outStrm->codec->time_base = inStrm->codec->time_base;
outStrm->codec->pix_fmt = inStrm->codec->pix_fmt;
outStrm->codec->width = width;
outStrm->codec->height = height;
outStrm->codec->has_b_frames = inStrm->codec->has_b_frames;
if (!outStrm->codec->sample_aspect_ratio.num) {
AVRational r0 = {0, 1};
outStrm->codec->sample_aspect_ratio =
outStrm->sample_aspect_ratio =
inStrm->sample_aspect_ratio.num ? inStrm->sample_aspect_ratio :
inStrm->codec->sample_aspect_ratio.num ?
inStrm->codec->sample_aspect_ratio : r0;
}
avformat_write_header(outCtx, NULL);
filterGraph = avfilter_graph_alloc();
if (!filterGraph) {
printf("could not open filter graph");
return -1;
}
AVFilter *crop = avfilter_get_by_name("crop");
AVFilter *buffer = avfilter_get_by_name("buffer");
AVFilter *buffersink = avfilter_get_by_name("buffersink");
char args[512];
snprintf(args, sizeof(args), "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
width, height, inStrm->codec->pix_fmt,
inStrm->codec->time_base.num, inStrm->codec->time_base.den,
inStrm->codec->sample_aspect_ratio.num, inStrm->codec->sample_aspect_ratio.den);
err = avfilter_graph_create_filter(&buffer_ctx, buffer, NULL, args, NULL, filterGraph);
if (err < 0) {
printf("error initializing buffer filter\n");
return err;
}
err = avfilter_graph_create_filter(&buffersink_ctx, buffersink, NULL, NULL, NULL, filterGraph);
if (err < 0) {
printf("unable to create buffersink filter\n");
return err;
}
snprintf(args, sizeof(args), "%d:%d", width, height);
err = avfilter_graph_create_filter(&crop_ctx, crop, NULL, args, NULL, filterGraph);
if (err < 0) {
printf("error initializing crop filter\n");
return err;
}
err = avfilter_link(buffer_ctx, 0, crop_ctx, 0);
if (err < 0) {
printf("error linking filters\n");
return err;
}
err = avfilter_link(crop_ctx, 0, buffersink_ctx, 0);
if (err < 0) {
printf("error linking filters\n");
return err;
}
err = avfilter_graph_config(filterGraph, NULL);
if (err < 0) {
printf("error configuring the filter graph\n");
return err;
}
printf("filtergraph configured\n");
for (;;) {
AVPacket packet = {0};
av_init_packet(&packet);
err = AVERROR(EAGAIN);
while (AVERROR(EAGAIN) == err)
err = av_read_frame(inCtx, &packet);
if (err < 0) {
if (AVERROR_EOF != err && AVERROR(EIO) != err) {
printf("eof error\n");
return 1;
} else {
break;
}
}
if (packet.stream_index == vs) {
//
// AVPacket pkt_temp_;
// memset(&pkt_temp_, 0, sizeof(pkt_temp_));
// AVPacket *pkt_temp = &pkt_temp_;
//
// *pkt_temp = packet;
//
// int error, got_frame;
// int new_packet = 1;
//
// error = avcodec_decode_video2(inStrm->codec, frame, &got_frame, pkt_temp);
// if(error < 0) {
// LOGE("error %d", error);
// }
//
// // if (error >= 0) {
//
// // push the video data from decoded frame into the filtergraph
// int err = av_buffersrc_write_frame(buffer_ctx, frame);
// if (err < 0) {
// LOGE("error writing frame to buffersrc");
// return -1;
// }
// // pull filtered video from the filtergraph
// for (;;) {
// int err = av_buffersink_get_frame(buffersink_ctx, oframe);
// if (err == AVERROR_EOF || err == AVERROR(EAGAIN))
// break;
// if (err < 0) {
// LOGE("error reading buffer from buffersink");
// return -1;
// }
// }
//
// LOGI("output frame");
err = av_interleaved_write_frame(outCtx, &packet);
if (err < 0) {
printf("error at write frame");
return -1;
}
//}
}
av_free_packet(&packet);
}
av_write_trailer(outCtx);
if (!(outCtx->oformat->flags & AVFMT_NOFILE) && outCtx->pb)
avio_close(outCtx->pb);
avformat_free_context(outCtx);
avformat_close_input(&inCtx);
return 0;
}
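On the last point: one way to crop without libavfilter is to offset the plane pointers of each decoded frame before handing it to the encoder. A minimal sketch, assuming AV_PIX_FMT_YUV420P frames and even crop offsets (crop_yuv420p_in_place is a hypothetical helper, not part of the code above):
/* Crop a decoded AV_PIX_FMT_YUV420P frame in place by offsetting its plane
 * pointers; no pixel data is copied. left, top, crop_w and crop_h must be
 * even because the chroma planes are subsampled by 2 in both directions. */
static void crop_yuv420p_in_place(AVFrame *frame, int left, int top, int crop_w, int crop_h) {
    frame->data[0] += top * frame->linesize[0] + left;           /* Y  */
    frame->data[1] += (top / 2) * frame->linesize[1] + left / 2; /* Cb */
    frame->data[2] += (top / 2) * frame->linesize[2] + left / 2; /* Cr */
    frame->width = crop_w;
    frame->height = crop_h;
}
The rotation issue is separate: phone recordings usually carry rotation as stream metadata (a "rotate" tag) or display-matrix side data rather than in the bitstream, so copying the input stream's metadata to the output stream (for example with av_dict_copy(&outStrm->metadata, inStrm->metadata, 0)) is usually what preserves it.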
-
lavf: switch to AVStream.time_base as the hint for the muxer timebase
18 May 2014, by Anton Khirnov
lavf: switch to AVStream.time_base as the hint for the muxer timebase
Previously, AVStream.codec.time_base was used for that purpose, which
was quite confusing for the callers. This change also opens the path for
removing AVStream.codec.
The change in the lavf-mkv test is due to the native timebase (1/1000)
being used instead of the default one (1/90000), so the packets are now
sent to the crc muxer in the same order in which they are demuxed
(previously some of them got reordered because of inexact timestamp
conversion).
- [DH] doc/APIchanges
- [DH] libavformat/avformat.h
- [DH] libavformat/avienc.c
- [DH] libavformat/filmstripenc.c
- [DH] libavformat/framehash.c
- [DH] libavformat/movenc.c
- [DH] libavformat/mpegtsenc.c
- [DH] libavformat/mux.c
- [DH] libavformat/mxfenc.c
- [DH] libavformat/oggenc.c
- [DH] libavformat/riffenc.c
- [DH] libavformat/rmenc.c
- [DH] libavformat/swf.h
- [DH] libavformat/swfenc.c
- [DH] libavformat/utils.c
- [DH] libavformat/version.h
- [DH] libavformat/yuv4mpegenc.c
- [DH] tests/ref/lavf/mkv
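In practical terms, with this change a muxing application hints the desired timebase by setting AVStream.time_base before avformat_write_header(), and then uses whatever value the muxer actually settled on. A minimal sketch under that assumption (open_muxer_with_timebase_hint is a hypothetical helper; setting up the stream's codec parameters and most error handling are omitted):
#include <libavformat/avformat.h>
/* Hypothetical helper: hint the muxer timebase through AVStream.time_base.
 * The muxer may override the hint in avformat_write_header(), so packet
 * timestamps should be rescaled into st->time_base afterwards. */
static int open_muxer_with_timebase_hint(const char *filename) {
    AVFormatContext *oc = NULL;
    if (avformat_alloc_output_context2(&oc, NULL, NULL, filename) < 0)
        return -1;
    AVStream *st = avformat_new_stream(oc, NULL);
    if (!st)
        return -1;
    st->time_base = (AVRational){1, 1000}; /* hint: millisecond timestamps */
    /* ... set up the stream's codec parameters here ... */
    if (avio_open(&oc->pb, filename, AVIO_FLAG_WRITE) < 0)
        return -1;
    if (avformat_write_header(oc, NULL) < 0) /* may adjust st->time_base */
        return -1;
    /* rescale each packet into the timebase the muxer chose, e.g.
     * av_packet_rescale_ts(&pkt, input_time_base, st->time_base); */
    return 0;
}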