
Media (1)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (75)
-
Organising by category
17 May 2013, by
In MédiaSPIP, a section has two names: catégorie (category) and rubrique (section).
The documents stored in MédiaSPIP can be filed under different categories. A category can be created by clicking "publier une catégorie" in the publish menu at the top right (after logging in). A category can itself be filed inside another category, so a tree of categories can be built.
The next time a document is published, the newly created category will be offered (...) -
Retrieving information from the master site when installing an instance
26 November 2010, by
Purpose
On the main site, a shared-hosting (mutualisation) instance is defined by several things: its data in the spip_mutus table; its logo; its main author (id_admin in the spip_mutus table, matching an id_auteur in the spip_auteurs table), who will be the only one allowed to create the instance definitively;
It can therefore make good sense to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...) -
Emballe médias: what is it for?
4 February 2011, by
This plugin is designed to manage sites that publish documents of all types online.
It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a so-called "media" article;
On other sites (6213)
-
Modifying motion vectors in ffmpeg H.264 decoder
14 February 2012, by qontranami
For research purposes, I am trying to modify H.264 motion vectors (MVs) for each P- and B-frame prior to motion compensation during the decoding process. I am using FFmpeg for this purpose. An example of a modification is replacing each MV with the mean of its original spatial neighbors and then using the resulting MVs for motion compensation, rather than the original ones. Please direct me appropriately.
So far, I have been able to do a simple modification of MVs in the file /libavcodec/h264_cavlc.c. In the function ff_h264_decode_mb_cavlc(), modifying the mx and my variables, for instance by increasing their values, changes the MVs used during decoding.
For example, as shown below, the mx and my values are increased by 50, thus lengthening the MVs used in the decoder.
mx += get_se_golomb(&s->gb)+50;
my += get_se_golomb(&s->gb)+50;
However, I don't know how to access the neighbors of mx and my for the spatial mean analysis mentioned in the first paragraph. I believe the key to doing so lies in manipulating the mv_cache array.
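Roughly, what I have in mind is something like the sketch below (untested; it assumes the 2012-era H264Context layout, where fill_decode_caches() has already populated h->mv_cache and scan8[] maps block indices the way h264_mvpred.h uses them, and it only looks at list 0, block 0):
/* Untested sketch, placed in ff_h264_decode_mb_cavlc() after mx/my are decoded:
 * read the left (A) and top (B) neighbor MVs from mv_cache and average them.
 * Assumes mv_cache/scan8 as in the 2012-era FFmpeg H.264 decoder. */
const int index8 = scan8[0];
const int16_t *mv_A = h->mv_cache[0][index8 - 1]; /* left neighbor */
const int16_t *mv_B = h->mv_cache[0][index8 - 8]; /* top neighbor  */

mx = (mv_A[0] + mv_B[0]) / 2;   /* replace the decoded MVx by the spatial mean */
my = (mv_A[1] + mv_B[1]) / 2;   /* replace the decoded MVy by the spatial mean */
Is mv_cache the right structure to read here, or is there a better place to get at the neighboring vectors?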
Another experiment I performed was in the file libavcodec/error_resilience.c. Based on the guess_mv() function, I created a new function, mean_mv(), that is executed in ff_er_frame_end() within the first if-statement. That if-statement returns from ff_er_frame_end() when, among other conditions, the error count is zero (s->error_count == 0). I inserted my mean_mv() call at that point so that it always executes when the error count is zero. This experiment partly yielded the results I wanted: I could start seeing artifacts in the top portion of the video, but they were restricted to the upper-right corner. I'm guessing that my inserted function is not running to completion, perhaps in order to meet playback deadlines.
Below is the modified if-statement. The only addition is my function, mean_mv(s).
if(!s->error_recognition || s->error_count==0 || s->avctx->lowres ||
   s->avctx->hwaccel ||
   s->avctx->codec->capabilities&CODEC_CAP_HWACCEL_VDPAU ||
   s->picture_structure != PICT_FRAME || // we dont support ER of field pictures yet, though it should not crash if enabled
   s->error_count==3*s->mb_width*(s->avctx->skip_top + s->avctx->skip_bottom)) {
    //av_log(s->avctx, AV_LOG_DEBUG, "ff_er_frame_end in er.c\n"); //KG
    if(s->pict_type==AV_PICTURE_TYPE_P)
        mean_mv(s);
    return;
}
And here's the mean_mv() function I created based on guess_mv().
static void mean_mv(MpegEncContext *s){
    //uint8_t fixed[s->mb_stride * s->mb_height];
    //const int mb_stride = s->mb_stride;
    const int mb_width = s->mb_width;
    const int mb_height= s->mb_height;
    int mb_x, mb_y, mot_step, mot_stride;

    //av_log(s->avctx, AV_LOG_DEBUG, "mean_mv\n"); //KG

    set_mv_strides(s, &mot_step, &mot_stride);

    for(mb_y=0; mb_y<mb_height; mb_y++){
        for(mb_x=0; mb_x<mb_width; mb_x++){
            const int mb_xy= mb_x + mb_y*s->mb_stride;
            const int mot_index= (mb_x + mb_y*mot_stride) * mot_step;
            int mv_predictor[4][2]={{0}};
            int ref[4]={0};
            int pred_count=0;
            int m, n;

            if(IS_INTRA(s->current_picture.f.mb_type[mb_xy])) continue;
            //if(!(s->error_status_table[mb_xy]&MV_ERROR)){
            //if (1){

            if(mb_x>0){
                mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index - mot_step][0];
                mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index - mot_step][1];
                ref[pred_count]            = s->current_picture.f.ref_index[0][4*(mb_xy-1)];
                pred_count++;
            }
            if(mb_x+1<mb_width){
                mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index + mot_step][0];
                mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index + mot_step][1];
                ref[pred_count]            = s->current_picture.f.ref_index[0][4*(mb_xy+1)];
                pred_count++;
            }
            if(mb_y>0){
                mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index - mot_stride*mot_step][0];
                mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index - mot_stride*mot_step][1];
                ref[pred_count]            = s->current_picture.f.ref_index[0][4*(mb_xy-s->mb_stride)];
                pred_count++;
            }
            if(mb_y+1<mb_height){
                mv_predictor[pred_count][0]= s->current_picture.f.motion_val[0][mot_index + mot_stride*mot_step][0];
                mv_predictor[pred_count][1]= s->current_picture.f.motion_val[0][mot_index + mot_stride*mot_step][1];
                ref[pred_count]            = s->current_picture.f.ref_index[0][4*(mb_xy+s->mb_stride)];
                pred_count++;
            }

            if(pred_count==0) continue;

            if(pred_count>=1){
                int sum_x=0, sum_y=0, sum_r=0;
                int k;

                for(k=0; k<pred_count; k++){
                    sum_x+= mv_predictor[k][0]; // Sum all the MVx from MVs avail. for EC
                    sum_y+= mv_predictor[k][1]; // Sum all the MVy from MVs avail. for EC
                    sum_r+= ref[k];
                    // if(k && ref[k] != ref[k-1])
                    //     goto skip_mean_and_median;
                }

                mv_predictor[pred_count][0] = sum_x/k;
                mv_predictor[pred_count][1] = sum_y/k;
                ref[pred_count]             = sum_r/k;
            }

            s->mv[0][0][0] = mv_predictor[pred_count][0];
            s->mv[0][0][1] = mv_predictor[pred_count][1];

            for(m=0; m<mot_step; m++){
                for(n=0; n<mot_step; n++){
                    s->current_picture.f.motion_val[0][mot_index + m + n * mot_stride][0] = s->mv[0][0][0];
                    s->current_picture.f.motion_val[0][mot_index + m + n * mot_stride][1] = s->mv[0][0][1];
                }
            }

            decode_mb(s, ref[pred_count]);
            //}
        }
    }
}
I would really appreciate some assistance on how to go about this properly.
-
Android: How to play MP3 files with ID3 tags using FFMPEG
16 July 2012, by Vipul Purohit
I am trying to play various sound files using the FFMPEG lib on Android. I have managed to play most files, but some files,
for example MP3 files with ID3 tags,
are not playing. I tried following various examples for reference but didn't have any success. Here is my native-side code:
#include <jni.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <android/log.h>
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#define LOG_TAG "mylib"
#define LOGI(...) __android_log_print(ANDROID_LOG_INFO, LOG_TAG, __VA_ARGS__)
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)
#define AUDIO_INBUF_SIZE 20480
#define AUDIO_REFILL_THRESH 4096
void Java_ru_dzakhov_ffmpeg_test_MainActivity_createEngine(JNIEnv* env, jclass clazz)
{
avcodec_init();
av_register_all();
}
jstring Java_ru_dzakhov_ffmpeg_test_MainActivity_loadFile(JNIEnv* env, jobject obj,jstring file,jbyteArray array)
{
{
jboolean isfilenameCopy;
const char * filename = (*env)->GetStringUTFChars(env, file, &isfilenameCopy);
int audioStreamIndex;
AVCodec *codec;
AVCodecContext *c= NULL;
AVFormatContext * pFormatCtx;
AVCodecContext * aCodecCtx;
int out_size, len, audioStream=-1, i, err;
FILE *f, *outfile;
uint8_t *outbuf;
uint8_t inbuf[AUDIO_INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
AVPacket avpkt;
jclass cls = (*env)->GetObjectClass(env, obj);
jmethodID play = (*env)->GetMethodID(env, cls, "playSound", "([BI)V"); // at the beginning of your main function
LOGE("source file name is %s", filename);
avcodec_init();
av_register_all();
LOGE("Stage 1");
/* get the format information of the source file into AVFormatContext */
int lError;
if ((lError = av_open_input_file(&pFormatCtx, filename, NULL, 0, NULL)) !=0 ) {
LOGE("Error open source file: %d", lError);
exit(1);
}
if ((lError = av_find_stream_info(pFormatCtx)) < 0) {
LOGE("Error find stream information: %d", lError);
exit(1);
}
LOGE("Stage 1.5");
LOGE("audio format: %s", pFormatCtx->iformat->name);
LOGE("audio bitrate: %d", pFormatCtx->bit_rate);
audioStreamIndex = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, &codec, 0);
// audioStreamIndex=-1;
// LOGE("nb_stream %d", pFormatCtx->nb_streams);
// for(i=0; i<pFormatCtx->nb_streams; i++)
// {
// LOGE("nb_stream %d", pFormatCtx->nb_streams);
// if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO)
// {
// audioStream=i;
// break;
// }
// }
//
// if(audioStream==-1)
// {
// LOGE("Cannot find audio");
// }
LOGE("audioStreamIndex %d", audioStreamIndex);
// if (audioStreamIndex == AVERROR_STREAM_NOT_FOUND) {
// LOGE(1, "cannot find a audio stream");
// exit(1);
// } else if (audioStreamIndex == AVERROR_DECODER_NOT_FOUND) {
// LOGE(1, "audio stream found, but no decoder is found!");
// exit(1);
// }
LOGE("audio codec: %s", codec->name);
/* get the codec information of the audio stream into AVCodecContext */
aCodecCtx = pFormatCtx->streams[audioStreamIndex]->codec;
if (avcodec_open(aCodecCtx, codec) < 0) {
LOGE("cannot open the audio codec!");
exit(1);
}
printf("Audio decoding\n");
LOGE("Stage 1.7");
LOGE("S");
codec = avcodec_find_decoder(aCodecCtx->codec_id);
LOGE("Stage 1.8");
if (!codec) {
LOGE("codec not found\n");
exit(1);
}
LOGE("Stage 2");
// c= avcodec_alloc_context();
LOGE("Stage 3");
/* open it */
if (avcodec_open(aCodecCtx, codec) < 0) {
LOGE("could upper");
fprintf(stderr, "could not open codec\n");
LOGE("could not open codec");
}
LOGE("Stage 4");
outbuf = malloc(AVCODEC_MAX_AUDIO_FRAME_SIZE);
f = fopen(filename, "rb");
if (!f) {
fprintf(stderr, "could not open %s\n", filename);
LOGE("could not open");
exit(1);
}
/* decode until eof */
avpkt.data = inbuf;
avpkt.size = fread(inbuf, 1, AUDIO_INBUF_SIZE, f);
LOGE("Stage 5");
while (avpkt.size > 0) {
// LOGE("Stage 6");
out_size = (AVCODEC_MAX_AUDIO_FRAME_SIZE/3)*2;
len = avcodec_decode_audio3(aCodecCtx, (int16_t *)outbuf, &out_size, &avpkt);
LOGE("data_size %d len %d", out_size, len);
if (len < 0) {
fprintf(stderr, "Error while decoding\n");
LOGE("DECODING ERROR");
LOGE("DECODING ERROR %d", len);
exit(1);
}
// LOGE("Stage 7");
if (out_size > 0) {
/* if a frame has been decoded, output it */
// LOGE("Stage 8");
jbyte *bytes = (*env)->GetByteArrayElements(env, array, NULL);
memcpy(bytes, outbuf, out_size); //
(*env)->ReleaseByteArrayElements(env, array, bytes, 0);
(*env)->CallVoidMethod(env, obj, play, array, out_size);
// LOGE("DECODING ERROR5");
}
LOGE("Stage 9");
avpkt.size -= len;
avpkt.data += len;
if (avpkt.size < AUDIO_REFILL_THRESH) {
/* Refill the input buffer, to avoid trying to decode
* incomplete frames. Instead of this, one could also use
* a parser, or use a proper container format through
* libavformat. */
memmove(inbuf, avpkt.data, avpkt.size);
avpkt.data = inbuf;
len = fread(avpkt.data + avpkt.size, 1,
AUDIO_INBUF_SIZE - avpkt.size, f);
if (len > 0)
avpkt.size += len;
}
}
LOGE("Stage 12");
fclose(f);
free(outbuf);
avcodec_close(c);
av_free(c);
}
}
Sorry for the rough code, but I am still testing with it.
I found the reason: the MP3 file has ID3 info, but I don't handle it, so all the ID3 data is decoded as frame data; the correct frame-header sync word 0xFFE00000 can't be found and avcodec_decode_audio3() returns -1.
I found this in the question "mp3 decoding using ffmpeg API (Header missing)".
If anyone can help me handle and play MP3 files with ID3 tags (and do the same for similar metadata in other formats), I would appreciate it.
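What I am thinking of trying is to skip the ID3v2 tag myself before handing the buffer to avcodec_decode_audio3(). Below is a minimal, untested sketch based on the ID3v2 header layout (10 bytes: "ID3", version, flags, then a syncsafe 28-bit tag size), inserted right after the first fread():
/* Untested sketch: skip a leading ID3v2 tag so the decoder sees the first
 * real MPEG audio frame header instead of tag data. */
if (avpkt.size >= 10 && memcmp(inbuf, "ID3", 3) == 0) {
    int tag_size = (inbuf[6] << 21) | (inbuf[7] << 14) |
                   (inbuf[8] << 7)  |  inbuf[9];       /* syncsafe size  */
    tag_size += 10;                                    /* add the header */
    if (inbuf[5] & 0x10)                               /* footer flag    */
        tag_size += 10;
    if (tag_size >= avpkt.size) {
        /* tag is larger than the buffer: seek past it and refill */
        fseek(f, tag_size, SEEK_SET);
        avpkt.size = fread(inbuf, 1, AUDIO_INBUF_SIZE, f);
        avpkt.data = inbuf;
    } else {
        avpkt.data += tag_size;
        avpkt.size -= tag_size;
    }
}
Alternatively, since the file is already opened with av_open_input_file(), reading packets with av_read_frame() instead of the raw fread() loop should let libavformat strip the ID3 tag by itself.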
-
How to use FFMpeg -timestamp syntax
6 January 2015, by reinhard.lee
Hi, all!
Does the ffmpeg -timestamp option work like the image above, i.e. "07:21:54 07/07/05" rendered as white text in a black box?
On Ubuntu 12.04 I typed the command like this:
ffmpeg -y -f video4linux2 -s vga -r 30 -fs 1M -i /dev/video0 -timestamp now -copyts ./USB1_Test_vga.mp4
But it doesn't work.
Is there any other option to display the time the video was recorded?
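The closest thing I have found so far is burning the wall-clock time into the picture with the drawtext filter. Something like the line below (untested; it assumes an ffmpeg build with drawtext enabled and the DejaVu font path installed by Ubuntu's ttf-dejavu package):
ffmpeg -y -f video4linux2 -s vga -r 30 -i /dev/video0 -vf "drawtext=fontfile=/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf:text='%{localtime\:%T %d/%m/%y}':fontcolor=white:box=1:boxcolor=black" ./USB1_Test_vga.mp4
But I am not sure whether that is what -timestamp was meant for, or whether there is a simpler built-in option.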