Advanced search

Media (1)

Keyword: - Tags -/artwork

Other articles (74)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature works out of the box. No configuration step is therefore required for this.

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the shared hosting farm on a regular basis. Combined with a system Cron on the central site of the farm, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized and is visible only when the visitor is logged in to the site.
    The user can also edit their profile from their author page; a "Modify your profile" link in the navigation is (...)

On other sites (5519)

  • Adding A New System To The Game Music Website

    1 August 2012, by Multimedia Mike — General

    At first, I was planning to just make a little website where users could install a Chrome browser extension and play music from old 8-bit NES games. But, like many software projects, the goal sort of ballooned. I created a website where users can easily play old video game music. It doesn’t cover too many systems yet, but I have had individual requests to add just about every system you can think of.

    The craziest part is that I know it’s possible to represent most of the systems. Eventually, it would be great to reach Chipamp parity (a combination plugin for Winamp that packages together plugins for many of these chiptunes). But there is a process to all of this. I have taken to defining a number of phases that are required to get a new system covered.

    Phase 0 informally involves marveling at the obscurity of some of the console systems for which chiptune collections have evolved. WonderSwan? Sharp X68000? PC-88? I may be viewing this through a terribly Ameri-centric lens. I’ve at least heard of the ZX Spectrum and the Amstrad CPC even if I’ve never seen either.

    No matter. The goal is to get all their chiptunes cataloged and playable.

    Phase 1: Finding A Player
    The first step is to find a bit of open source code that can play a particular format. If it’s a library that can handle many formats, like Game Music Emu or Audio Overload SDK, even better (probably). The specific open source license isn’t a big concern for me. I’m almost certain that some of the libraries that SaltyGME currently mixes are somehow incompatible, license-wise. I’ll worry about it when I encounter someone who A) cares, and B) is in a position to do something about it. Historical preservation comes first, and these software libraries aren’t getting any younger (I’m finding some that haven’t been touched in a decade).

    Phase 2: Test Program
    The next phase is to create a basic test bench program that sends a music file into the library, generates a buffer of audio, and shoves it out to the speakers via PulseAudio’s simple API (people like to rip on PulseAudio, but its simple API really lives up to its name and requires pages less boilerplate code to play a few samples than ALSA).
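    To make the shape of that test bench concrete, a minimal sketch against PulseAudio's simple API might look like the following. The decoder_render() call is a stand-in for whatever "fill this buffer with samples" function the library under test actually exposes, not a real API:

    #include <stdint.h>
    #include <stdio.h>
    #include <pulse/simple.h>
    #include <pulse/error.h>

    /* Placeholder for the chiptune library under test: fill 'buf' with
       nb_samples interleaved 16-bit samples. */
    extern void decoder_render(int16_t *buf, int nb_samples);

    int main(void)
    {
        /* 16-bit native-endian stereo at 44.1 kHz, the format most of these
           playback libraries emit. */
        pa_sample_spec ss = { .format = PA_SAMPLE_S16NE, .rate = 44100, .channels = 2 };
        int error;
        pa_simple *pa = pa_simple_new(NULL, "chiptune-test", PA_STREAM_PLAYBACK,
                                      NULL, "playback", &ss, NULL, NULL, &error);
        if (!pa) {
            fprintf(stderr, "pa_simple_new: %s\n", pa_strerror(error));
            return 1;
        }

        int16_t buf[4096];                    /* 2048 stereo frames per write */
        for (int i = 0; i < 500; i++) {       /* roughly 23 seconds of audio */
            decoder_render(buf, 4096);
            if (pa_simple_write(pa, buf, sizeof(buf), &error) < 0) {
                fprintf(stderr, "pa_simple_write: %s\n", pa_strerror(error));
                break;
            }
        }
        pa_simple_drain(pa, &error);
        pa_simple_free(pa);
        return 0;
    }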

    Phase 3: Plug Into Web Player
    After successfully creating the test bench and understanding exactly which source files need to be built, the next phase is to hook it up to the main SaltyGME program via the ad-hoc plugin API I developed. This API requires that a player backend can, at the very least, initialize itself based on a buffer of bytes and generate audio samples into an array of 16-bit numbers. The API also provides functions for managing files with multiple tracks and toggling individual voices/channels if the library supports such a feature. Having the test bench application written beforehand usually smooths out this step.
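    In rough C terms, the contract amounts to something like the following; the names are illustrative and do not match the actual SaltyGME symbols:

    #include <stdint.h>
    #include <stddef.h>

    /* Illustrative sketch of the backend contract described above. */
    typedef struct player_backend {
        /* Initialize the decoder from a raw file image already in memory. */
        int  (*init)(const uint8_t *data, size_t size, int sample_rate);
        /* Fill 'out' with nb_samples interleaved 16-bit samples;
           return the number of samples actually written. */
        int  (*generate)(int16_t *out, int nb_samples);
        /* Multi-track files (NSF, GBS, ...): report track count, jump to a track. */
        int  (*track_count)(void);
        int  (*start_track)(int track);
        /* Optional: mute or unmute an individual voice/channel. */
        void (*toggle_voice)(int voice, int enabled);
        void (*cleanup)(void);
    } player_backend;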

    But really, I’m just getting started.

    Phase 4: Collecting A Song Corpus
    Then there is the matter of staging a collection of songs for a given system. It seems like it would just be a matter of finding a large collection of songs for a given format, downloading them in bulk, and mirroring them. Honestly, that’s the easy part. People who are interested in this stuff have been lovingly curating massive collections of these songs for years (see SNESmusic.org for one of the best examples, and they also host a torrent of all their music for really quick and easy hoarding).

    In my drive to make this game music website more useful for normal people, the goal is to extract as much metadata as possible to make searching better, and to package the data so that it’s as convenient as possible for users. Whenever I seek to add a new format to the collection, this is the phase where I invariably find that I have to fundamentally modify some of the assumptions I originally made in the player.

    First, there were the NES Sound Format (NSF) files, the original format I wanted to play. These are files that have any number of songs packed into a single file. Playback libraries expose APIs to jump to individual tracks. So the player was designed around that. Game Boy GBS files also fall into this category but present a different challenge vis-à-vis metadata, addressed in the next phase.
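    Game Music Emu's C API is a good illustration of that per-track model; a trimmed-down usage sketch (error handling omitted) looks roughly like this:

    #include <gme/gme.h>

    /* Open a bundled file (e.g. an .nsf), then step through its tracks.
       Error handling is omitted; gme_* calls return a message string on failure. */
    void play_all_tracks(const char *path)
    {
        Music_Emu *emu;
        if (gme_open_file(path, &emu, 44100))
            return;

        short buf[2048];                      /* interleaved stereo samples */
        int tracks = gme_track_count(emu);
        for (int t = 0; t < tracks; t++) {
            gme_start_track(emu, t);          /* jump straight to one song */
            /* Render about 30 seconds; a real player would instead watch
               gme_track_ended() and apply a fade. */
            for (int i = 0; i < (44100 * 2 * 30) / 2048; i++)
                gme_play(emu, 2048, buf);     /* hand 'buf' to the audio output */
        }
        gme_delete(emu);
    }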

    Then, there were the SPC files. Each SPC file is its own song and multiple SPC files are commonly bundled as RAR files. Not wanting to deal with RAR, or any format where I interacted with a general compression API to pull a few files out, I created a custom resource format (inspired by so many I have studied and documented) and compressed it with a simpler compression API. I also had to modify some of the player’s assumptions to deal with this archive format. Genesis VGMs, bundled either in .zip or .7z, followed the same model as SPC in RAR.
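    Purely for illustration (this is not the actual SaltyGME archive layout), the general shape of such a resource bundle is just a small index followed by the member files back to back, with the whole container then run through a simple compression API:

    #include <stdint.h>

    /* Illustrative only -- not the real format. */
    struct bundle_entry {
        uint32_t offset;      /* byte offset of the member within the bundle */
        uint32_t size;        /* member size in bytes */
        char     name[56];    /* original filename, NUL-padded */
    };

    struct bundle_header {
        char     magic[8];    /* identifies the container */
        uint32_t version;
        uint32_t entry_count; /* followed by entry_count bundle_entry records,
                                 then the member data itself */
    };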

    Then it was suggested that I attempt to bring SaltyGME closer to feature parity with Chipamp, rather than just being a Chrome browser frontend for Game Music Emu. When I studied the Portable Sound Format (PSF), I realized it didn’t fit into the player model I already had. PSF uses a sort of shared library model for code execution and I developed another resource archive format to cope with it. So that covers quite a few formats.

    One more architecture challenge arose when I started to study one of the prevailing metadata formats, explained in the next phase.

    Phase 5: Metadata
    Finally, for the collections to really be useful, I need to harvest that juicy metadata for search and presentation.

    I have created a series of programs and scripts to scrape metadata out of these music files and store it all in a database that drives the website and search engine. I recognize that it’s no good to have a large corpus of songs with minimal metadata, so while importing bulk quantities of music, the scripts harshly reject songs that have too little metadata.

    Again, challenges abound. One of the biggest challenges I’m facing is the peculiar quasi-freeform metadata format that emerged as .m3u, which takes a form similar to:

    #################################################################
    #
    # GRADIUS2
    # (c) KONAMI  by Furukawa Motoaki, IKACHAN
    #
    #################################################################
    

    nemesis2.kss::KSS,62,[Nemesis2] (Opening),2:23,,0
    nemesis2.kss::KSS,61,[Nemesis2] (Start),7,,0
    nemesis2.kss::KSS,43,[Nemesis2] (Air Battle),34,0-
    nemesis2.kss::KSS,44,[Nemesis2] (1st. BGM),51,0-
    [...]

    A lot of file formats (including Game Boy GBS mentioned earlier) store their metadata separately using this format. I have some ideas about tools I can use to help me process this data but I’m pretty sure each one will require some manual intervention.
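    As a starting point, pulling the obvious fields out of one of those track lines is straightforward; the sketch below assumes the comma-separated shape shown above (first four fields non-empty) and would need more guards for the messier variants:

    #include <stdio.h>

    /* Very rough parser for one "file::emulator,track,title,length,..." line. */
    struct m3u_entry {
        char file[256];       /* "nemesis2.kss::KSS": file plus player type */
        int  track;           /* second field: track number in the bundled file */
        char title[256];
        char length[32];      /* "2:23" or a bare number of seconds */
    };

    static int parse_m3u_line(const char *line, struct m3u_entry *e)
    {
        if (line[0] == '#' || line[0] == '\0')   /* skip the banner/comment lines */
            return 0;
        return sscanf(line, "%255[^,],%d,%255[^,],%31[^,]",
                      e->file, &e->track, e->title, e->length) == 4;
    }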

    As alluded to in phase 4, .m3u presents another architectural challenge: notice the second field in the CSV .m3u data. That’s a track number. A player can’t expect every track in a bundled chiptune file to be valid, nor to be in any particular order. Thus, I needed to alter the architecture once more to take this into account. However, instead of modifying the SaltyGME player, I simply extended the metadata database to include a playback order which, by default, is the same as the track order but can also accommodate this new issue. This also has the bonus of providing a facility to exclude playback of certain tracks. This comes in handy for many PSF archives which tend to include files that only provide support for other files and aren’t meant to be played on their own.

    Bright Side
    The reward for all of this effort is that the data lands in a proper database in the end. None of it goes back into the chiptune files themselves. This makes further modification easier as all of the data that is indexed and presented on the site comes from the database. Somewhere down the road, I should probably create an API for accessing this metadata.

  • Android AudioRecord to FFMPEG encode native AAC

    8 March 2013, by Curtis Kiu

    I am building video chat on Android and I would like to use FFmpeg to stream over RTSP or RTMP, but I am trying RTSP first.
    The problem right now is that av_write_frame or av_interleaved_write_frame fails to work or simply crashes.
    Maybe...
    the AudioRecord sample format does not match the FFmpeg settings, or
    the frames being delivered do not match what the encoder expects.

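    For what it's worth, the JNI code below opens the encoder with sample_fmt = AV_SAMPLE_FMT_FLT while AudioRecord delivers ENCODING_PCM_16BIT, so if the first guess is right, something roughly like this hypothetical helper (not in my code yet) would be needed before encoding:

    #include <stdint.h>

    /* Hypothetical helper, not part of the code below: convert the 16-bit PCM
       that AudioRecord delivers into floats in [-1.0, 1.0], which is what an
       encoder opened with sample_fmt = AV_SAMPLE_FMT_FLT expects. */
    static void pcm16_to_float(const int16_t *in, float *out, int nb_samples)
    {
        int i;
        for (i = 0; i < nb_samples; i++)
            out[i] = in[i] / 32768.0f;
    }
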
    So, the code... AudioRecorder
    http://pastebin.com/iWtB3Jhy
    package com.curtis.broadcaster.Publisher;

    import android.app.Activity;
    import android.graphics.Bitmap;
    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.AudioRecord.OnRecordPositionUpdateListener;
    import android.media.MediaRecorder;
    import android.os.Bundle;
    import android.util.Log;

    public class Publisher extends Activity {
       private int mAudioBufferSize;
       private int mAudioBufferSampleSize;
       private AudioRecord mAudioRecord;
       private boolean inRecordMode = false;
       private short[] audioBuffer;
       private String Tag = "Publisher/Publisher.java";

       public void onCreate(Bundle savedInstanceState) {
           Log.i(Tag, "|| onCreate()");
           super.onCreate(savedInstanceState);
           initAudioRecord();
           Log.i(Tag, "-- End onCreate()");
       }

       @Override
       public void onResume() {
           Log.i(Tag, "|| onResume()");
           super.onResume();
           inRecordMode = true;
           Thread t = new Thread(new Runnable() {

               public void run() {
                   Log.i(Tag, "|| Run Threat t");
                   getSamples();
                   Log.i(Tag, "-- End Threat t");
               }
           });
           t.start();
           Log.i(Tag, "-- End onResume()");
       }

       protected void onPause() {
           Log.i(Tag, "|| Run onPause()");
           inRecordMode = false;
           super.onPause();
           Log.i(Tag, "-- End onPause()");
       }

       @Override
       protected void onDestroy() {
           Log.i(Tag, "|| Run onDestroy()");
           if (mAudioRecord != null) {
               mAudioRecord.release();
               Log.i(Tag + " onDestroy", "mAudioRecord.release()");
           }
           jniStopAll();
           super.onDestroy();
           android.os.Process.killProcess(android.os.Process.myPid());
           Log.i(Tag, "-- End onDestroy()");
       }

       public OnRecordPositionUpdateListener mListener = new OnRecordPositionUpdateListener() {

           public void onPeriodicNotification(AudioRecord recorder) {
               Log.i(Tag + " mListener(onPeriodicNotification)", "time is "
                       + System.currentTimeMillis());
               jniSetAudioSample(audioBuffer);
           //  audioBuffer = new short[mAudioBufferSampleSize];
           }

           public void onMarkerReached(AudioRecord recorder) {
               Log.i(Tag + " mListener(onMarkerReached)",
                       "time is " + System.currentTimeMillis());
               inRecordMode = false;
               recorder.stop();
               Log.i(Tag, "recorder.stop()");
           }
       };

       private void initAudioRecord() {
           try {
               jniCheck();
               int sampleRate = 44100;
               int channelConfig = AudioFormat.CHANNEL_IN_MONO;
               int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
               mAudioBufferSize = 2 * AudioRecord.getMinBufferSize(sampleRate,
                       channelConfig, audioFormat);
               mAudioBufferSampleSize = mAudioBufferSize / 2;
               Log.i(Tag, "Buffer Size " + mAudioBufferSize);
               Log.i(Tag, "new AudioRecord begin");

               mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                       sampleRate, channelConfig, audioFormat, mAudioBufferSize);
               Log.i(Tag, "new AudioRecord end");

               jniInitFFMpeg();
           } catch (IllegalArgumentException e) {
               Log.i(Tag, "initAudioRecord go Errors");
               e.printStackTrace();
           }

           // mAudioRecord.setNotificationMarkerPosition(10000);
           mAudioRecord.setPositionNotificationPeriod(1024);
           mAudioRecord.setRecordPositionUpdateListener(mListener);

           int audioRecordState = mAudioRecord.getState();
           if (audioRecordState != AudioRecord.STATE_INITIALIZED) {
               finish();
           }

       }

       private void getSamples() {
           Log.i(Tag, "|| getSamples()");
           if (mAudioRecord == null)
               return;

           audioBuffer = new short[mAudioBufferSampleSize];
           mAudioRecord.startRecording();
           int audioRecordingState = mAudioRecord.getRecordingState();
           if (audioRecordingState != AudioRecord.RECORDSTATE_RECORDING) {
               finish();
           }
           while (inRecordMode) {
               int samplesRead = mAudioRecord.read(audioBuffer, 0,
                       mAudioBufferSampleSize);
               Log.i(Tag, "getSamples >>SamplesRead : " + samplesRead);
           }
           mAudioRecord.stop();
           Log.i(Tag, "mAudioRecord.stop()");
       }

       private native void jniCheck();

       private native void jniInitFFMpeg();

       private native void jniSetAudioSample(short[] audioBuffer);

       private native void jniStopAll();

       static {
           System.loadLibrary("ffmpeg");
           System.loadLibrary("testerv4");

       }

    }

    FFMPEG JNI http://pastebin.com/hgPva35b

    #include <jni.h>
    #include <android/log.h>
    #include <android/bitmap.h>

    #include
    #include
    #include
    #include
    #include <sys/time.h>
    #include "libavformat/rtsp.h"

    #include <libavutil/mathematics.h>
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    #undef exit
    /* Log System */
    #define  LOG_TAG    "FFMPEGSample - v4a"
    #define DEBUG_TAG   "FFMPEG-AUDIO PART"
    #define  LOGI(...)  __android_log_print(ANDROID_LOG_INFO,LOG_TAG,__VA_ARGS__)
    #define  LOGE(...)  __android_log_print(ANDROID_LOG_ERROR,LOG_TAG,__VA_ARGS__)

    /* 5 seconds stream duration */
    #define STREAM_DURATION   5.0
    #define STREAM_FRAME_RATE 25 /* 25 images/s */
    #define STREAM_NB_FRAMES  ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
    #define STREAM_PIX_FMT      PIX_FMT_YUV420P /* default pix_fmt */
    #define VIDEO_CODEC_ID      CODEC_ID_FLV1
    #define AUDIO_CODEC_ID      CODEC_ID_AAC

    static int sws_flags = SWS_BICUBIC;
    int mode = 1; //1 = only audio, 2 = only video, 3 = both video and audio

    AVFormatContext *avForCtx;
    //AVFormatContext *oc;
    AVStream *audio_st, *video_st;
    double audio_pts, video_pts;
    int frameCount, audioFrameCount, start;
    char *url;

    /*Audio Declare*/
    float t, tincr, tincr2;
    int16_t *samples;
    uint8_t *audio_outbuf;
    int audio_outbuf_size;
    int audio_input_frame_size;

    AVFormatContext *createAVFormatContext();
    AVStream *add_audio_stream(AVFormatContext *oc, enum CodecID codec_id);
    void open_video(AVFormatContext *oc, AVStream *st);
    void open_audio(AVFormatContext *oc, AVStream *st);
    AVStream *add_video_stream(AVFormatContext *oc, enum CodecID codec_id);
    void write_audio_frame(AVFormatContext *oc, AVStream *st);
    void write_video_frame(AVFormatContext *oc, AVStream *st);
    void init();
    void setAudioSample(unsigned char *inSample[]);
    void stopAll();

    /*/////////////////////////////////JNI Bridge////////////////////////////////////// */
    void Java_com_curtis_broadcaster_Publisher_Publisher_jniCheck(JNIEnv* env,
           jobject this) {
       LOGI("-@ JNI work fine @-");
    }
    void Java_com_curtis_broadcaster_Publisher_Publisher_jniInitFFMpeg(JNIEnv* env,
           jobject this) {
       LOGI("-@ Init Encorder @-");

       /* initialize libavcodec, and register all codecs and formats */
       avcodec_init();
       avcodec_register_all();
       av_register_all();
       avformat_network_init(); //ERROR


       /* allocate the output media context */
       avForCtx = createAVFormatContext();
       frameCount = 1;
       audioFrameCount = 1;
       start = 0;

       /* add the audio and video streams using the default format codecs
        and initialize the codecs */
       video_st = NULL;
       audio_st = NULL;
       if (mode == 1 || mode == 3) {
           audio_st = add_audio_stream(avForCtx, AUDIO_CODEC_ID);
           LOGI("(Init Encorder) - addAudioStream");
       }
       if (mode == 2 || mode == 3) {
           video_st = add_video_stream(avForCtx, VIDEO_CODEC_ID);
           LOGI("(Init Encorder) - addVideoStream");

       }

       //  av_dump_format(avForCtx, 0, "rtsp://192.168.1.104/live/live", 1);
       LOGI("(Init Encorder) - Waiting to call open_*");

       if (audio_st) {
           open_audio(avForCtx, audio_st);
           LOGI("(Init Encorder) - open_audio");
       }

       if (video_st) {
           open_video(avForCtx, video_st);
           LOGI("(Init Encorder) - open_video");
       }

       av_write_header(avForCtx);
       LOGI("-@ Finish Init Encorder @-");

    }

    void Java_com_curtis_broadcaster_Publisher_Publisher_jniSetAudioSample(
           JNIEnv* env, jobject this, unsigned char *inSample[]) {
       if (audio_st) {
           LOGI("-@ Start setAudioSample @-");
           samples = (int16_t *) inSample;

           write_audio_frame(avForCtx, audio_st);
           LOGI("-@ Finish setAudioSample @-");
       }
    }

    void Java_com_curtis_broadcaster_Publisher_Publisher_jniStopAll(JNIEnv* env,
           jobject this) {
       LOGI("-@ Stopping All @-");
       //close_audio(avForCtx, audio_st);
       //close_video(avForCtx, video_st);
       LOGI("-@ Stopped All @-");
    }
    /*/////////////////////////////END JNI Bridge////////////////////////////////////// */

    /* New Added Coding */
    AVFormatContext *createAVFormatContext() {
       LOGI("-@OPEN - createAVFormatContext@-");

       AVFormatContext *ctx = avformat_alloc_context();
       //  ctx->oformat = av_guess_format("flv", "rtmp://192.168.1.104/live/live",
       //      NULL);
       //  ctx->oformat = av_guess_format("flv", NULL, NULL);

       //if (!av_guess_format("flv", NULL, NULL)) {

       //LOGI("-flv Can not Guess Format-");
       //}

       ctx->oformat = av_guess_format("rtsp", NULL, NULL);

       if (!av_guess_format("rtsp", NULL, NULL)) {

           LOGI("-flv Can not Guess Format-");
       }

       /*
        LOGI("%d",avformat_alloc_output_context2(&amp;ctx, ctx->oformat, "flv",
        "rtmp://192.168.1.104/live/live"));
        if (!ctx) {
        LOGI("-@avformat_alloc_output_context2 fail@-");
        }*/
       //   LOGI("flv %d",avformat_alloc_output_context2(&amp;ctx, ctx->oformat, "flv",
       //   "rtmp://192.168.1.104/live/live"));
       //   LOGI("rtmp %d",avformat_alloc_output_context2(&amp;ctx, ctx->oformat, "rtmp",
       //   "rtmp://192.168.1.104/live/live"));
       //   LOGI("mpeg4 %d",avformat_alloc_output_context2(&amp;ctx, ctx->oformat, "mpeg4",
       //   "rtmp://192.168.1.104/live/live"));
       //   LOGI("NULL %d",avformat_alloc_output_context2(&amp;ctx, ctx->oformat, NULL,
       //   "rtmp://192.168.1.104/live/live"));
       avformat_alloc_output_context2(&amp;ctx, ctx->oformat, "sdp",
               "rtsp://192.168.1.104:1935/live/live");

       if (!ctx) {
           LOGI("-@avformat_alloc_output_context2 fail@-");
       }

       LOGI("-@CLOSE - createAVFormatContext@-");

       return ctx;
    }

    /**************************************************************/
    /* audio output */

    /*
    * add an audio output stream
    */
    AVStream *add_audio_stream(AVFormatContext *oc, enum CodecID codec_id) {
       LOGI("-@OPEN - add_audio_stream@-");

       AVCodecContext *c;
       AVStream *st = avformat_new_stream(oc, avcodec_find_encoder(codec_id));

       if (!st) {
           LOGI("-@add_audio_stream - Could not alloc stream@-");
           exit(1);
       }
       st->id = 1;

       c = st->codec;
       c->codec_id = AUDIO_CODEC_ID;
       c->codec_type = AVMEDIA_TYPE_AUDIO;

       /* put sample parameters */
       c->sample_fmt = AV_SAMPLE_FMT_FLT;
       //c->sample_fmt = AV_SAMPLE_FMT_S16;
       c->bit_rate = 100000;
       c->sample_rate = 44100;
       c->channels = 1;

       // some formats want stream headers to be separate
       if (oc->oformat->flags &amp; AVFMT_GLOBALHEADER)
           c->flags |= CODEC_FLAG_GLOBAL_HEADER;
       LOGI("-@Close - add_audio_stream@-");

       return st;
    }

    void open_audio(AVFormatContext *oc, AVStream *st) {
       LOGI("@- open_audio -@");

       AVCodecContext *c;
       AVCodec *codec;

       c = st->codec;
       c->strict_std_compliance = -2;
       /* find the audio encoder */
       codec = avcodec_find_encoder(c->codec_id);
       if (!codec) {
           LOGI("@- open_audio E:codec not found-@");
           exit(1);
       }

       /* open it */
       if (avcodec_open(c, codec) &lt; 0) {
           LOGI("%d",avcodec_open(c, codec));
           LOGI("@- open_audio E:could not open codec-@");
           exit(1);
       }

       /* init signal generator */
       t = 0;
       tincr = 2 * M_PI * 110.0 / c->sample_rate;
       /* increment frequency by 110 Hz per second */
       tincr2 = 2 * M_PI * 110.0 / c->sample_rate / c->sample_rate;

       audio_outbuf_size = 10000;
       audio_outbuf = av_malloc(audio_outbuf_size);

       /* ugly hack for PCM codecs (will be removed ASAP with new PCM
        support to compute the input frame size in samples */
       if (c->frame_size &lt;= 1) {
           audio_input_frame_size = audio_outbuf_size / c->channels;
           switch (st->codec->codec_id) {
           case CODEC_ID_PCM_S16LE:
           case CODEC_ID_PCM_S16BE:
           case CODEC_ID_PCM_U16LE:
           case CODEC_ID_PCM_U16BE:
               audio_input_frame_size >>= 1;
               break;
           default:
               break;
           }
       } else {
           audio_input_frame_size = c->frame_size;
       }
       LOGI("audio_input_frame_size : %d",audio_input_frame_size);
       samples = av_malloc(audio_input_frame_size * 2 * c->channels);
       LOGI("@- Close open_audio -@");

    }

     /* prepare a 16 bit dummy audio frame of 'frame_size' samples and
     'nb_channels' channels */
    void get_audio_frame(int16_t *samples, int frame_size, int nb_channels) {
       LOGI("@- get_audio_frame -@");

       int j, i, v;
       int16_t *q;

       q = samples;
       for (j = 0; j &lt; frame_size; j++) {
           v = (int) (sin(t) * 10000);
           for (i = 0; i &lt; nb_channels; i++)
               *q++ = v;
           t += tincr;
           tincr += tincr2;
           LOGI("@- audio_frame Looping -@");
       }
       LOGI("@- CLOSE get_audio_frame -@");

    }

    void write_audio_frame(AVFormatContext *oc, AVStream *st) {
       LOGI("@- write_audio_frame -@");

       AVCodecContext *c;
       AVPacket pkt;
       av_init_packet(&amp;pkt);

       c = st->codec;

       //get_audio_frame(samples, audio_input_frame_size, c->channels);
       LOGI("@- write_audio_frame : got frame from get_audio_frame -@");

       pkt.size
               = avcodec_encode_audio(c, audio_outbuf, audio_outbuf_size, samples);
       LOGI("%d",pkt.size);

       if (c->coded_frame &amp;&amp; c->coded_frame->pts != AV_NOPTS_VALUE)
           pkt.pts
                   = av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
       LOGI("%d",pkt.pts);

       pkt.flags |= AV_PKT_FLAG_KEY;
       pkt.stream_index = st->index;
       pkt.data = audio_outbuf;
       LOGI("Finish PKT");

       /* write the compressed frame in the media file */
       //  if (av_interleaved_write_frame(oc, &pkt) != 0) {
       //  LOGI("@- write_audio_frame E:Error while writing audio frame -@");
       //  exit(1);
       //  }

       if (av_interleaved_write_frame(oc, &pkt) != 0) {
           LOGI("Error while writing audio frame %d\n", audioFrameCount);
       } else {
           LOGI("Writing Audio Frame %d", audioFrameCount);
       }

       LOGI("@- CLOSE write_audio_frame -@");
       audioFrameCount++;
       av_free_packet(&amp;pkt);
    }

    void close_audio(AVFormatContext *oc, AVStream *st) {
       avcodec_close(st->codec);

       av_free(samples);
       av_free(audio_outbuf);
    }

    /**************************************************************/
    /* video output */

    AVFrame *picture, *tmp_picture;
    uint8_t *video_outbuf;
    int frame_count, video_outbuf_size;

    /* add a video output stream */
    AVStream *add_video_stream(AVFormatContext *oc, enum CodecID codec_id) {
       AVCodecContext *c;
       AVStream *st;
       AVCodec *codec;

       st = avformat_new_stream(oc, NULL);
       if (!st) {
           fprintf(stderr, "Could not alloc stream\n");
           exit(1);
       }

       c = st->codec;

       /* find the video encoder */
       codec = avcodec_find_encoder(codec_id);
       if (!codec) {
           fprintf(stderr, "codec not found\n");
           exit(1);
       }
       avcodec_get_context_defaults3(c, codec);

       c->codec_id = codec_id;

       /* put sample parameters */
       c->bit_rate = 400000;
       /* resolution must be a multiple of two */
       c->width = 352;
       c->height = 288;
       /* time base: this is the fundamental unit of time (in seconds) in terms
        of which frame timestamps are represented. for fixed-fps content,
        timebase should be 1/framerate and timestamp increments should be
        identically 1. */
       c->time_base.den = STREAM_FRAME_RATE;
       c->time_base.num = 1;
       c->gop_size = 12; /* emit one intra frame every twelve frames at most */
       c->pix_fmt = STREAM_PIX_FMT;
       if (c->codec_id == CODEC_ID_MPEG2VIDEO) {
           /* just for testing, we also add B frames */
           c->max_b_frames = 2;
       }
       if (c->codec_id == CODEC_ID_MPEG1VIDEO) {
           /* Needed to avoid using macroblocks in which some coeffs overflow.
            This does not happen with normal video, it just happens here as
            the motion of the chroma plane does not match the luma plane. */
           c->mb_decision = 2;
       }
       // some formats want stream headers to be separate
       if (oc->oformat->flags &amp; AVFMT_GLOBALHEADER)
           c->flags |= CODEC_FLAG_GLOBAL_HEADER;

       return st;
    }

    AVFrame *alloc_picture(enum PixelFormat pix_fmt, int width, int height) {
       AVFrame * picture;
       uint8_t *picture_buf;
       int size;

       picture = avcodec_alloc_frame();
       if (!picture)
           return NULL;
       size = avpicture_get_size(pix_fmt, width, height);
       picture_buf = av_malloc(size);
       if (!picture_buf) {
           av_free(picture);
           return NULL;
       }
       avpicture_fill((AVPicture *) picture, picture_buf, pix_fmt, width, height);
       return picture;
    }

    void open_video(AVFormatContext *oc, AVStream *st) {
       AVCodec *codec;
       AVCodecContext *c;

       c = st->codec;

       /* find the video encoder */
       codec = avcodec_find_encoder(c->codec_id);
       if (!codec) {
           fprintf(stderr, "codec not found\n");
           exit(1);
       }

       /* open the codec */
       if (avcodec_open(c, codec) &lt; 0) {
           fprintf(stderr, "could not open codec\n");
           exit(1);
       }

       video_outbuf = NULL;
       if (!(oc->oformat->flags &amp; AVFMT_RAWPICTURE)) {
           /* allocate output buffer */
           /* XXX: API change will be done */
           /* buffers passed into lav* can be allocated any way you prefer,
            as long as they&#39;re aligned enough for the architecture, and
            they&#39;re freed appropriately (such as using av_free for buffers
            allocated with av_malloc) */
           video_outbuf_size = 200000;
           video_outbuf = av_malloc(video_outbuf_size);
       }

       /* allocate the encoded raw picture */
       picture = alloc_picture(c->pix_fmt, c->width, c->height);
       if (!picture) {
           fprintf(stderr, "Could not allocate picture\n");
           exit(1);
       }

       /* if the output format is not YUV420P, then a temporary YUV420P
        picture is needed too. It is then converted to the required
        output format */
       tmp_picture = NULL;
       if (c->pix_fmt != PIX_FMT_YUV420P) {
           tmp_picture = alloc_picture(PIX_FMT_YUV420P, c->width, c->height);
           if (!tmp_picture) {
               fprintf(stderr, "Could not allocate temporary picture\n");
               exit(1);
           }
       }
    }

    /* prepare a dummy image */
    void fill_yuv_image(AVFrame *pict, int frame_index, int width, int height) {
       int x, y, i;

       i = frame_index;

       /* Y */
       for (y = 0; y &lt; height; y++) {
           for (x = 0; x &lt; width; x++) {
               pict->data[0][y * pict->linesize[0] + x] = x + y + i * 3;
           }
       }

       /* Cb and Cr */
       for (y = 0; y &lt; height / 2; y++) {
           for (x = 0; x &lt; width / 2; x++) {
               pict->data[1][y * pict->linesize[1] + x] = 128 + y + i * 2;
               pict->data[2][y * pict->linesize[2] + x] = 64 + x + i * 5;
           }
       }
    }

    void write_video_frame(AVFormatContext *oc, AVStream *st) {
       int out_size, ret;
       AVCodecContext *c;
       struct SwsContext *img_convert_ctx;

       c = st->codec;

       if (frame_count >= STREAM_NB_FRAMES) {
           /* no more frame to compress. The codec has a latency of a few
            frames if using B frames, so we get the last frames by
            passing the same picture again */
       } else {
           if (c->pix_fmt != PIX_FMT_YUV420P) {
               /* as we only generate a YUV420P picture, we must convert it
                to the codec pixel format if needed */
               if (img_convert_ctx == NULL) {
                   img_convert_ctx = sws_getContext(c->width, c->height,
                           PIX_FMT_YUV420P, c->width, c->height, c->pix_fmt,
                           sws_flags, NULL, NULL, NULL);
                   if (img_convert_ctx == NULL) {
                       fprintf(stderr,
                               "Cannot initialize the conversion context\n");
                       exit(1);
                   }
               }
               fill_yuv_image(tmp_picture, frame_count, c->width, c->height);
               sws_scale(img_convert_ctx, tmp_picture->data,
                       tmp_picture->linesize, 0, c->height, picture->data,
                       picture->linesize);
           } else {
               fill_yuv_image(picture, frame_count, c->width, c->height);
           }
       }

       if (oc->oformat->flags &amp; AVFMT_RAWPICTURE) {
           /* raw video case. The API will change slightly in the near
            future for that. */
           AVPacket pkt;
           av_init_packet(&amp;pkt);

           pkt.flags |= AV_PKT_FLAG_KEY;
           pkt.stream_index = st->index;
           pkt.data = (uint8_t *) picture;
           pkt.size = sizeof(AVPicture);

           ret = av_interleaved_write_frame(oc, &pkt);
       } else {
           /* encode the image */
           out_size = avcodec_encode_video(c, video_outbuf, video_outbuf_size,
                   picture);
           /* if zero size, it means the image was buffered */
           if (out_size > 0) {
               AVPacket pkt;
               av_init_packet(&amp;pkt);

               if (c->coded_frame->pts != AV_NOPTS_VALUE)
                   pkt.pts = av_rescale_q(c->coded_frame->pts, c->time_base,
                           st->time_base);
               if (c->coded_frame->key_frame)
                   pkt.flags |= AV_PKT_FLAG_KEY;
               pkt.stream_index = st->index;
               pkt.data = video_outbuf;
               pkt.size = out_size;

               /* write the compressed frame in the media file */
               ret = av_interleaved_write_frame(oc, &pkt);
           } else {
               ret = 0;
           }
       }
       if (ret != 0) {
           fprintf(stderr, "Error while writing video frame\n");
           exit(1);
       }
       frame_count++;
    }

    void close_video(AVFormatContext *oc, AVStream *st) {
       avcodec_close(st->codec);
       av_free(picture->data[0]);
       av_free(picture);
       if (tmp_picture) {
           av_free(tmp_picture->data[0]);
           av_free(tmp_picture);
       }
       av_free(video_outbuf);
    }

    The Android manifest has been set up and everything is initialized.
    Please give me some ideas.
    Here is some of the log output: http://pastebin.com/uPD5LyH2

  • Link dynamic library and ffmpeg x86_64 version

    24 November 2011, by daniel

    I'm working with FFmpeg on Mac OS X; my Mac version is 10.6.8 (i386).

    When I try to compile my C++ code while linking a dynamic library:

    g++ sdk.cpp -rpath /usr/local/lib/libinsight.dylib -o sdk

    I get the following error:

    Undefined symbols for architecture x86_64:
     "_main", referenced from:
       start in crt1.10.6.o
     "av_open_input_file(AVFormatContext**, char const*, AVInputFormat*, int,  AVFormatParameters*)", referenced from:
       ffmpeg_open(AVFormatContext**, char const*, int*)in ccCkx9dd.o

     (and so forth for every FFmpeg call)

     ld: symbol(s) not found for architecture x86_64
     collect2: ld returned 1 exit status

    Without linking the dylib I have no problem. What's the matter?

    P.S. The ffmpeg binary is a Mach-O 64-bit executable (x86_64).