Other articles (74)

  • Organize by category

    17 May 2013

    In MediaSPIP, a section goes by two names: category and rubrique.
    The various documents stored in MediaSPIP can be filed under different categories. A category can be created by clicking on "publier une catégorie" ("publish a category") in the publish menu at the top right (after logging in). A category can itself be filed under another category, which means you can build a whole tree of categories.
    The next time a document is published, the newly created category will be offered (...)

  • Retrieving information from the master site when installing an instance

    26 November 2010

    Purpose
    On the main site, a mutualization instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, matching an id_auteur in the spip_auteurs table), who will be the only one able to finalize the creation of the mutualization instance.
    It can therefore be quite sensible to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it has been enabled, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is operational right away; no manual configuration step is required.

On other sites (5153)

  • Using ffmpeg to convert sound files for use in an Android app

    10 January 2012, by stefs

    Short: I'm trying to simply play a sound file converted with ffmpeg in my Android app, but I'm having trouble getting it to work.

    Long: we have an iPhone app and an Android app that do the same thing, and I have to port the feature that plays a sound on a user interaction. I have the source file in AIFF format and tried to convert it to MP3 for Android, but the app keeps crashing when it tries to load the file:

    AssetFileDescriptor fileDescriptor = context.getResources().openRawResourceFd(resid);
    final MediaPlayer mp = new MediaPlayer();
    mp.setDataSource(fileDescriptor.getFileDescriptor(), fileDescriptor.getStartOffset(), fileDescriptor.getLength());
    fileDescriptor.close();
    mp.prepare();

    More specifically, mp.setDataSource() is the call that crashes. Some digging around led me to believe that something is wrong with the encoding. The sound file itself resides in res/raw.

    11-29 17:11:48.012: ERROR/SoundManager(15580): java.io.IOException: setDataSourceFD failed.: status=0x80000000
    11-29 17:11:48.012: ERROR/SoundManager(15580):     at android.media.MediaPlayer.setDataSource(Native Method)
    ...

    What I tried:

    • Using a different MP3 that's already used with the same code in a different place. This works.
    • Converting it to a WAV file. This didn't crash the app, but it didn't play any sound either; that might be a different problem.
    • Converting it to Ogg; that crashed too.

    So, the ffmpeg conversion parameters are as follows:

    $ ffmpeg -i click_24db.aif -f mp3 ~/foobar/wheel_click.mp3
    ffmpeg version 0.7.8, Copyright (c) 2000-2011 the FFmpeg developers
     built on Nov 24 2011 14:31:00 with gcc 4.2.1 (Apple Inc. build 5666) (dot 3)
     configuration: --prefix=/opt/local --enable-gpl --enable-postproc --enable-swscale --enable-avfilter --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libdirac --enable-libschroedinger --enable-libopenjpeg --enable-libxvid --enable-libx264 --enable-libvpx --enable-libspeex --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/gcc-4.2 --arch=x86_64 --enable-yasm
     libavutil    50. 43. 0 / 50. 43. 0
     libavcodec   52.123. 0 / 52.123. 0
     libavformat  52.111. 0 / 52.111. 0
     libavdevice  52.  5. 0 / 52.  5. 0
     libavfilter   1. 80. 0 /  1. 80. 0
     libswscale    0. 14. 1 /  0. 14. 1
     libpostproc  51.  2. 0 / 51.  2. 0
    Input #0, aiff, from 'click_24db.aif':
     Duration: 00:00:00.01, start: 0.000000, bitrate: 1570 kb/s
       Stream #0.0: Audio: pcm_s16be, 44100 Hz, 2 channels, s16, 1411 kb/s
    Output #0, mp3, to '/Users/xyz/foobar/wheel_click.mp3':
     Metadata:
       TSSE            : Lavf52.111.0
       Stream #0.0: Audio: libmp3lame, 44100 Hz, 2 channels, s16, 64 kb/s
    Stream mapping:
     Stream #0.0 -> #0.0
    Press [q] to stop, [?] for help
    size=       1kB time=00:00:00.05 bitrate=  92.9kbits/s    
    video:0kB audio:0kB global headers:0kB muxing overhead 45.563549%

    The resulting file plays fine in iTunes, does not play in VLC, and crashes when loaded with android.media.MediaPlayer (note: I first tried it with the SoundPool library, with both MP3 and Ogg, but that didn't work either).

    I also tried the following parameters, which didn't work:

    ffmpeg -i inputfile.aif -f mp3 -acodec libmp3lame -ab 192000 -ar 44100 outputfile.mp3

    I'm working on OS X and built ffmpeg with MacPorts today; the Android API level is 7 (Google APIs, 2.1-update1). The "supported formats" table on dev.android didn't indicate that my file is out of spec, but I may be mistaken about that.

    I don't have the slightest clue regarding bitrates and so on, so could anybody please point me to the right combination of ffmpeg parameters to get a working MP3 for Android? I don't care whether the resulting file ends up as MP3, Ogg, 3GP or whatever.
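
    For reference, the next thing I'm considering is a fully explicit conversion that pins the codec, channel count, sample rate and a constant bitrate within the MP3 range the Android docs list (all standard ffmpeg flags, though I don't know yet whether it helps in this case):

    $ ffmpeg -i click_24db.aif -acodec libmp3lame -ac 2 -ar 44100 -ab 128k -f mp3 wheel_click.mp3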

  • Envelope pattern in SoX (Sound eXchange) or ffmpeg

    26 May 2016, by pJay

    I’ve been using SoX to generate white noise. I’m after a way of modulating the volume across the entire track in a way that will create a pattern similar to this:

    [Image: white noise envelope effect]

    I’ve experimented with fade, but that fades in to 100% volume and fades out to 0% volume, which is just a pain in this instance.

    The tremolo effect isn’t quite what I’m after either, as the frequency of the pattern will be changing over time.

    The only other alternative is to split the white noise file into separate files, apply fade and then apply trim to either end so it doesn’t fade all the way, but this seems like a lot of unnecessary processing.

    I’ve been checking out this example, "Using SoX to change the volume level of a range of time in an audio file", but I don’t think it’s quite what I’m after.

    I’m using the command line in Ubuntu with SoX, but I’m open to suggestions with ffmpeg or any other Linux-based command-line solution.
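
    One possibility that avoids splitting the file: ffmpeg’s volume filter accepts a time-dependent expression when eval=frame is set, so the gain can oscillate between a floor and a ceiling instead of fading to silence. A minimal sketch (the envelope expression and file names are only illustrative; the real pattern would need a different formula):

    $ ffmpeg -i whitenoise.wav -af "volume='0.4+0.3*(1+sin(2*PI*t/5))':eval=frame" shaped.wav

    Here the gain stays between 40% and 100%, cycling every 5 seconds; t is the frame timestamp in seconds.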

  • Android. Problems with AudioTrack class. Sound sometimes lost

    29 January 2014, by bukka.wh

    I have found an open-source video player for Android which uses ffmpeg to decode video.
    I have a problem with the audio, which sometimes plays with stutters even though the video picture is shown fine. The basic idea of the player is that audio and video are decoded in two separate threads and then handed back in a third one: the video picture is drawn on a SurfaceView, and the sound is passed as a byte array to an AudioTrack, which plays it. But sometimes the sound drops out or stutters. Can anyone give me a starting point for what to do (some basic concepts)? Maybe I should change the buffer size for the AudioTrack or add some flags to it (one buffer-size experiment is sketched after the native code below). Here is the piece of code where the AudioTrack is created.

    private AudioTrack prepareAudioTrack(int sampleRateInHz,
           int numberOfChannels) {

        // Retry loop: if the device rejects this channel configuration,
        // fall back to fewer channels and try again.
        for (;;) {
           int channelConfig;
           if (numberOfChannels == 1) {
               channelConfig = AudioFormat.CHANNEL_OUT_MONO;
           } else if (numberOfChannels == 2) {
               channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
           } else if (numberOfChannels == 3) {
               channelConfig = AudioFormat.CHANNEL_OUT_FRONT_CENTER
                       | AudioFormat.CHANNEL_OUT_FRONT_RIGHT
                       | AudioFormat.CHANNEL_OUT_FRONT_LEFT;
           } else if (numberOfChannels == 4) {
               channelConfig = AudioFormat.CHANNEL_OUT_QUAD;
           } else if (numberOfChannels == 5) {
               channelConfig = AudioFormat.CHANNEL_OUT_QUAD
                       | AudioFormat.CHANNEL_OUT_LOW_FREQUENCY;
           } else if (numberOfChannels == 6) {
               channelConfig = AudioFormat.CHANNEL_OUT_5POINT1;
           } else if (numberOfChannels == 8) {
               channelConfig = AudioFormat.CHANNEL_OUT_7POINT1;
           } else {
               channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
           }
           try {
               Log.d("MyLog","Creating Audio player");
               int minBufferSize = AudioTrack.getMinBufferSize(sampleRateInHz,
                       channelConfig, AudioFormat.ENCODING_PCM_16BIT);
               AudioTrack audioTrack = new AudioTrack(
                       AudioManager.STREAM_MUSIC, sampleRateInHz,
                       channelConfig, AudioFormat.ENCODING_PCM_16BIT,
                       minBufferSize, AudioTrack.MODE_STREAM);
               return audioTrack;
            } catch (IllegalArgumentException e) {
                // Fall back: try stereo, then mono, before giving up.
               if (numberOfChannels > 2) {
                   numberOfChannels = 2;
               } else if (numberOfChannels > 1) {
                   numberOfChannels = 1;
               } else {
                   throw e;
               }
           }
       }
    }

    And this is the piece of native code where the sound bytes are written to the AudioTrack:

    int player_write_audio(struct DecoderData *decoder_data, JNIEnv *env,
            int64_t pts, uint8_t *data, int data_size, int original_data_size) {
        struct Player *player = decoder_data->player;
        int stream_no = decoder_data->stream_no;
        int err = ERROR_NO_ERROR;
        int ret;
        AVCodecContext *c = player->input_codec_ctxs[stream_no];
        AVStream *stream = player->input_streams[stream_no];
        LOGI(10, "player_write_audio Writing audio frame")

        jbyteArray samples_byte_array = (*env)->NewByteArray(env, data_size);
        if (samples_byte_array == NULL) {
            err = -ERROR_NOT_CREATED_AUDIO_SAMPLE_BYTE_ARRAY;
            goto end;
        }

        // Advance the audio clock: from the packet pts if available,
        // otherwise estimate it from the number of samples decoded.
        if (pts != AV_NOPTS_VALUE) {
            player->audio_clock = av_rescale_q(pts, stream->time_base,
                    AV_TIME_BASE_Q);
            LOGI(9, "player_write_audio - read from pts")
        } else {
            int64_t sample_time = original_data_size;
            sample_time *= 1000000ll;
            sample_time /= c->channels;
            sample_time /= c->sample_rate;
            sample_time /= av_get_bytes_per_sample(c->sample_fmt);
            player->audio_clock += sample_time;
            LOGI(9, "player_write_audio - added")
        }
        enum WaitFuncRet wait_ret = player_wait_for_frame(player,
                player->audio_clock + AUDIO_TIME_ADJUST_US, stream_no);
        if (wait_ret == WAIT_FUNC_RET_SKIP) {
            goto end;
        }

        LOGI(10, "player_write_audio Writing sample data")

        // Copy the decoded PCM into the Java byte[] and hand it to
        // AudioTrack.write() through JNI.
        jbyte *jni_samples = (*env)->GetByteArrayElements(env, samples_byte_array,
                NULL);
        memcpy(jni_samples, data, data_size);
        (*env)->ReleaseByteArrayElements(env, samples_byte_array, jni_samples, 0);

        LOGI(10, "player_write_audio playing audio track");
        ret = (*env)->CallIntMethod(env, player->audio_track,
                player->audio_track_write_method, samples_byte_array, 0, data_size);
        jthrowable exc = (*env)->ExceptionOccurred(env);
        if (exc) {
            err = -ERROR_PLAYING_AUDIO;
            LOGE(3, "Could not write audio track: reason in exception");
            // TODO maybe release exc
            goto free_local_ref;
        }
        if (ret < 0) {
            err = -ERROR_PLAYING_AUDIO;
            LOGE(3,
                    "Could not write audio track: reason: %d look in AudioTrack.write()",
                    ret);
            goto free_local_ref;
        }

    free_local_ref:
        LOGI(10, "player_write_audio releasing local ref");
        (*env)->DeleteLocalRef(env, samples_byte_array);

    end:
        return err;
    }
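
    One starting point, assuming the stutter comes from buffer underruns: allocate the AudioTrack with a buffer several times the minimum rather than exactly getMinBufferSize(). A minimal sketch of the change inside prepareAudioTrack (the multiplier 4 is an arbitrary value to experiment with, not a known fix):

    // Give AudioTrack more headroom than the bare minimum so it can ride out
    // scheduling hiccups in the decoder and writer threads.
    int minBufferSize = AudioTrack.getMinBufferSize(sampleRateInHz,
            channelConfig, AudioFormat.ENCODING_PCM_16BIT);
    int bufferSize = 4 * minBufferSize; // hypothetical multiplier; tune empirically
    AudioTrack audioTrack = new AudioTrack(
            AudioManager.STREAM_MUSIC, sampleRateInHz,
            channelConfig, AudioFormat.ENCODING_PCM_16BIT,
            bufferSize, AudioTrack.MODE_STREAM);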

    I would appreciate any help. Thank you very much!