Media (91)

Other articles (5)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Automated installation script of MediaSPIP

    25 April 2011

    To overcome the difficulties, mainly due to the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to facilitate this step on a server with a compatible Linux distribution.
    To use it, you must have SSH access to your server and a root account, which the script uses to install the dependencies. Contact your provider if you do not have these.
    Documentation on using this installation script is available here.
    The code of this (...)

  • Other interesting software

    13 April 2011

    We don’t claim to be the only ones doing what we do, nor do we claim to be the best; we simply try to do it well and to keep getting better.
    The following list presents software that is more or less similar to MediaSPIP, or that MediaSPIP more or less tries to emulate.
    We have not used or tested them, but you can take a peek.
    Videopress
    Website: http://videopress.com/
    License: GNU/GPL v2
    Source code: (...)

On other sites (2289)

  • FFmpeg + OpenAL - playback streaming sound from video won't work

    28 January 2014, by TheSHEEEP

    I am decoding an OGG video (theora & vorbis as codecs) and want to show it on the screen (using Ogre 3D) while playing its sound. I can decode the image stream just fine and the video plays perfectly with the correct frame rate, etc.

    However, I cannot get the sound to play at all with OpenAL.

    Edit: I managed to make the playing sound at least somewhat resemble the actual audio in the video. Updated sample code.

    Edit 2: I was able to get "almost" correct sound now. I had to set OpenAL to use AL_FORMAT_STEREO_FLOAT32 (after initializing the extension) instead of just AL_FORMAT_STEREO16. Now the sound is "only" extremely high-pitched and stuttering, but at the correct speed.
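
    For reference, a minimal sketch of selecting that format (the helper name is illustrative, not from the post; it assumes an AL context is already current):

       // Query the stereo float32 format, falling back to 16-bit integer
       // samples if the AL_EXT_float32 extension is not available
       ALenum pickStreamingFormat()
       {
           if (alIsExtensionPresent("AL_EXT_float32"))
               return alGetEnumValue("AL_FORMAT_STEREO_FLOAT32");
           return AL_FORMAT_STEREO16;
       }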

    Here is how I decode audio packets (in a background thread; the equivalent works just fine for the image stream of the video file):

    //------------------------------------------------------------------------------
    int decodeAudioPacket(  AVPacket& p_packet, AVCodecContext* p_audioCodecContext, AVFrame* p_frame,
                           FFmpegVideoPlayer* p_player, VideoInfo& p_videoInfo)
    {
       // Decode the audio frame from the packet
       int got_frame = 0;
       int decoded = avcodec_decode_audio4(p_audioCodecContext, p_frame, &got_frame, &p_packet);
       if (decoded < 0)
       {
           p_videoInfo.error = "Error decoding audio frame.";
           return decoded;
       }

       // Frame is complete, store it in the audio frame queue
       if (got_frame)
       {
           int bufferSize = av_samples_get_buffer_size(NULL, p_audioCodecContext->channels, p_frame->nb_samples,
                                                       p_audioCodecContext->sample_fmt, 0);

           int64_t duration = p_frame->pkt_duration;
           int64_t dts = p_frame->pkt_dts;

           if (staticOgreLog)
           {
               staticOgreLog->logMessage("Audio frame bufferSize / duration / dts: "
                       + boost::lexical_cast<std::string>(bufferSize) + " / "
                       + boost::lexical_cast<std::string>(duration) + " / "
                       + boost::lexical_cast<std::string>(dts), Ogre::LML_NORMAL);
           }

           // Create the audio frame
           AudioFrame* frame = new AudioFrame();
           frame->dataSize = bufferSize;
           frame->data = new uint8_t[bufferSize];
           if (p_frame->channels == 2)
           {
               // Two channels: copy the first channel plane into the lower
               // half of the buffer and the second plane into the upper half
               memcpy(frame->data, p_frame->data[0], bufferSize >> 1);
               memcpy(frame->data + (bufferSize >> 1), p_frame->data[1], bufferSize >> 1);
           }
           else
           {
               // Single plane: packed samples live entirely in data[0]
               memcpy(frame->data, p_frame->data[0], bufferSize);
           }
           // NOTE: pkt_duration is in the stream's time_base; using the codec
           // context's time_base here assumes the two are identical
           double timeBase = ((double)p_audioCodecContext->time_base.num) / (double)p_audioCodecContext->time_base.den;
           frame->lifeTime = duration * timeBase;

           p_player->addAudioFrame(frame);
       }

       return decoded;
    }
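
    As noted in the comment above, pkt_duration is expressed in the stream's time_base rather than the codec context's, and the two are not guaranteed to match. A minimal conversion sketch, assuming the AVStream the packets were read from is available as p_audioStream (a name introduced here for illustration, not from the post):

       // Convert a decoded frame's pkt_duration (stream time_base units)
       // to a lifetime in seconds
       double frameLifeTimeSeconds(const AVStream* p_audioStream, const AVFrame* p_frame)
       {
           return p_frame->pkt_duration * av_q2d(p_audioStream->time_base);
       }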

    So, as you can see, I decode the frame and memcpy it into my own struct, AudioFrame. When the sound is played, I use these audio frames like this:

       int numBuffers = 4;
       ALuint buffers[4];
       alGenBuffers(numBuffers, buffers);
       ALenum success = alGetError();
       if(success != AL_NO_ERROR)
       {
           CONSOLE_LOG("Error on alGenBuffers : " + Ogre::StringConverter::toString(success) + alGetString(success));
           return;
       }

       // Fill a number of data buffers with audio from the stream
       std::vector<AudioFrame*> audioBuffers;
       std::vector<unsigned int> audioBufferSizes;
       unsigned int numReturned = FFMPEG_PLAYER->getDecodedAudioFrames(numBuffers, audioBuffers, audioBufferSizes);

       // Assign the data buffers to the OpenAL buffers
       for (unsigned int i = 0; i < numReturned; ++i)
       {
           alBufferData(buffers[i], _streamingFormat, audioBuffers[i]->data, audioBufferSizes[i], _streamingFrequency);

           success = alGetError();
           if(success != AL_NO_ERROR)
           {
               CONSOLE_LOG("Error on alBufferData : " + Ogre::StringConverter::toString(success) + alGetString(success)
                               + " size: " + Ogre::StringConverter::toString(audioBufferSizes[i]));
               return;
           }
       }

       // Queue the buffers into OpenAL
       alSourceQueueBuffers(_source, numReturned, buffers);
       success = alGetError();
       if(success != AL_NO_ERROR)
       {
           CONSOLE_LOG("Error queuing streaming buffers: " + Ogre::StringConverter::toString(success) + alGetString(success));
           return;
       }

       // Start playback of the queued buffers
       alSourcePlay(_source);

    The format and frequency I give to OpenAL are AL_FORMAT_STEREO_FLOAT32 (it is a stereo sound stream, and I did initialize the FLOAT32 extension) and 48000 (which is the sample rate of the AVCodecContext of the audio stream).

    And during playback, I do the following to refill OpenAL's buffers :

    ALint numBuffersProcessed;

    // Check if OpenAL is done with any of the queued buffers
    alGetSourcei(_source, AL_BUFFERS_PROCESSED, &numBuffersProcessed);
    if(numBuffersProcessed <= 0)
       return;

    // Fill a number of data buffers with audio from the stream
    std::vector<AudioFrame*> audioBuffers;
    std::vector<unsigned int> audioBufferSizes;
    unsigned int numFilled = FFMPEG_PLAYER->getDecodedAudioFrames(numBuffersProcessed, audioBuffers, audioBufferSizes);

    // Assign the data buffers to the OpenAL buffers
    ALuint buffer;
    for (unsigned int i = 0; i < numFilled; ++i)
    {
       // Pop the oldest queued buffer from the source,
       // fill it with the new data, then re-queue it
       alSourceUnqueueBuffers(_source, 1, &buffer);

       ALenum success = alGetError();
       if(success != AL_NO_ERROR)
       {
           CONSOLE_LOG("Error Unqueuing streaming buffers: " + Ogre::StringConverter::toString(success));
           return;
       }

       alBufferData(buffer, _streamingFormat, audioBuffers[i]->data, audioBufferSizes[i], _streamingFrequency);

       success = alGetError();
       if(success != AL_NO_ERROR)
       {
           CONSOLE_LOG("Error on re- alBufferData: " + Ogre::StringConverter::toString(success));
           return;
       }

       alSourceQueueBuffers(_source, 1, &buffer);

       success = alGetError();
       if(success != AL_NO_ERROR)
       {
           CONSOLE_LOG("Error re-queuing streaming buffers: " + Ogre::StringConverter::toString(success) + " "
                       + alGetString(success));
           return;
       }
    }

    // Make sure the source is still playing,
    // and restart it if needed.
    ALint playStatus;
    alGetSourcei(_source, AL_SOURCE_STATE, &playStatus);
    if(playStatus != AL_PLAYING)
       alSourcePlay(_source);

    As you can see, I do quite heavy error checking, but I do not get any errors, neither from OpenAL nor from FFmpeg.
    Edit: What I hear somewhat resembles the actual audio from the video, but VERY high-pitched and stuttering VERY much. Also, it seems to be playing on top of TV noise. Very strange. Plus, it is playing much slower than the correct audio would.
    Edit 2: After using AL_FORMAT_STEREO_FLOAT32, the sound plays at the correct speed, but is still very high-pitched and stuttering (though less than before).

    The video itself is not broken; it can be played fine in any player. OpenAL can also play *.wav files just fine in the same application, so it is working as well.

    Any ideas what could be wrong here, or how to do this correctly?

    My only guess is that somehow, FFmpeg's decode function does not produce data OpenAL can read. But this is as far as the FFmpeg decode example goes, so I don't know what's missing. As I understand it, avcodec_decode_audio4 decodes the frame to raw data, and OpenAL should be able to work with raw data (or rather, does not work with anything else).
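
    One possibility worth checking (an assumption about the sample format; the post never prints it): FFmpeg's Vorbis decoder typically returns planar floats (AV_SAMPLE_FMT_FLTP), one buffer per channel, whereas OpenAL expects interleaved samples (L R L R ...). The two-channel memcpy in decodeAudioPacket above copies the planes back-to-back, which keeps all the data but not that ordering. A minimal interleaving sketch:

       // Interleave two planar float channel buffers (all left samples,
       // then all right samples) into the L R L R ... layout OpenAL expects
       void interleaveStereoFloat(const float* p_left, const float* p_right,
                                  float* p_out, int p_numSamples)
       {
           for (int i = 0; i < p_numSamples; ++i)
           {
               p_out[2 * i]     = p_left[i];
               p_out[2 * i + 1] = p_right[i];
           }
       }

    Here p_left and p_right would come from p_frame->data[0] and p_frame->data[1], and p_numSamples from p_frame->nb_samples.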

  • VideoView does not play Audio in Video properly

    30 January 2014, by Jay

    I have an *.mp4 file with a duration of 2 minutes. Its audio track starts at 30 seconds and runs up to 1:10; the parts before 30 seconds and after 1:10 are silent.

    Now the problem is that when I try to play it in a VideoView or MediaPlayer, the audio plays right from the beginning of the video rather than from its actual position. I tried this on multiple phones with the same result.

    When I play the same video in MX Player or on Windows (VLC), it plays properly.

    What is the solution to this problem?

    Edit

    I used FFmpeg's -itsoffset option to produce the video described above.

    ffmpeg -y -i a.mp4 -itsoffset 00:00:30 -i sng.m4a -map 0:0 -map 1:0 -c:v copy -preset ultrafast out.mp4

    Thanks in advance.

  • Android. Problems with AudioTrack class. Sound sometimes lost

    29 January 2014, by bukka.wh

    I have found an open-source video player for Android which uses ffmpeg to decode video.
    I have some problems with audio: it sometimes plays with jerks, while the video picture is shown well. The basic idea of the player is that audio and video are decoded in two different threads, and then in a third thread they are passed back: the video picture is shown on a SurfaceView, and the video sound is passed in a byte array to an AudioTrack, which then plays it. But sometimes the sound is lost or plays with jerks. Can anyone give me a starting point for what to do (some basic concepts)? Maybe I should change the buffer size for the AudioTrack, or add some flags to it. Here is the piece of code where the AudioTrack is created.

    private AudioTrack prepareAudioTrack(int sampleRateInHz,
           int numberOfChannels) {

       for (;;) {
           int channelConfig;
           if (numberOfChannels == 1) {
               channelConfig = AudioFormat.CHANNEL_OUT_MONO;
           } else if (numberOfChannels == 2) {
               channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
           } else if (numberOfChannels == 3) {
               channelConfig = AudioFormat.CHANNEL_OUT_FRONT_CENTER
                       | AudioFormat.CHANNEL_OUT_FRONT_RIGHT
                       | AudioFormat.CHANNEL_OUT_FRONT_LEFT;
           } else if (numberOfChannels == 4) {
               channelConfig = AudioFormat.CHANNEL_OUT_QUAD;
           } else if (numberOfChannels == 5) {
               channelConfig = AudioFormat.CHANNEL_OUT_QUAD
                       | AudioFormat.CHANNEL_OUT_LOW_FREQUENCY;
           } else if (numberOfChannels == 6) {
               channelConfig = AudioFormat.CHANNEL_OUT_5POINT1;
           } else if (numberOfChannels == 8) {
               channelConfig = AudioFormat.CHANNEL_OUT_7POINT1;
           } else {
               channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
           }
           try {
               Log.d("MyLog","Creating Audio player");
               int minBufferSize = AudioTrack.getMinBufferSize(sampleRateInHz,
                       channelConfig, AudioFormat.ENCODING_PCM_16BIT);
               AudioTrack audioTrack = new AudioTrack(
                       AudioManager.STREAM_MUSIC, sampleRateInHz,
                       channelConfig, AudioFormat.ENCODING_PCM_16BIT,
                       minBufferSize, AudioTrack.MODE_STREAM);
               return audioTrack;
           } catch (IllegalArgumentException e) {
               if (numberOfChannels > 2) {
                   numberOfChannels = 2;
               } else if (numberOfChannels > 1) {
                   numberOfChannels = 1;
               } else {
                   throw e;
               }
           }
       }
    }

    And this is the piece of native code where the sound bytes are written to the AudioTrack:

    int player_write_audio(struct DecoderData *decoder_data, JNIEnv *env,
       int64_t pts, uint8_t *data, int data_size, int original_data_size) {
    struct Player *player = decoder_data->player;
    int stream_no = decoder_data->stream_no;
    int err = ERROR_NO_ERROR;
    int ret;
    AVCodecContext * c = player->input_codec_ctxs[stream_no];
    AVStream *stream = player->input_streams[stream_no];
    LOGI(10, "player_write_audio Writing audio frame")

    jbyteArray samples_byte_array = (*env)->NewByteArray(env, data_size);
    if (samples_byte_array == NULL) {
       err = -ERROR_NOT_CREATED_AUDIO_SAMPLE_BYTE_ARRAY;
       goto end;
    }

    if (pts != AV_NOPTS_VALUE) {
       player->audio_clock = av_rescale_q(pts, stream->time_base, AV_TIME_BASE_Q);
       LOGI(9, "player_write_audio - read from pts")
    } else {
       // No pts available: advance the audio clock by this chunk's duration,
       // i.e. bytes / (channels * bytes_per_sample * sample_rate), in microseconds
       int64_t sample_time = original_data_size;
       sample_time *= 1000000ll;
       sample_time /= c->channels;
       sample_time /= c->sample_rate;
       sample_time /= av_get_bytes_per_sample(c->sample_fmt);
       player->audio_clock += sample_time;
       LOGI(9, "player_write_audio - added")
    }
    enum WaitFuncRet wait_ret = player_wait_for_frame(player,
           player->audio_clock + AUDIO_TIME_ADJUST_US, stream_no);
    if (wait_ret == WAIT_FUNC_RET_SKIP) {
       goto end;
    }

    LOGI(10, "player_write_audio Writing sample data")

    jbyte *jni_samples = (*env)->GetByteArrayElements(env, samples_byte_array,
           NULL);
    memcpy(jni_samples, data, data_size);
    (*env)->ReleaseByteArrayElements(env, samples_byte_array, jni_samples, 0);

    LOGI(10, "player_write_audio playing audio track");
    ret = (*env)->CallIntMethod(env, player->audio_track,
           player->audio_track_write_method, samples_byte_array, 0, data_size);
    jthrowable exc = (*env)->ExceptionOccurred(env);
    if (exc) {
       err = -ERROR_PLAYING_AUDIO;
       LOGE(3, "Could not write audio track: reason in exception");
       // TODO maybe release exc
       goto free_local_ref;
    }
    if (ret < 0) {
       err = -ERROR_PLAYING_AUDIO;
       LOGE(3,
               "Could not write audio track: reason: %d look in AudioTrack.write()", ret);
       goto free_local_ref;
    }

    free_local_ref:
    LOGI(10, "player_write_audio releasing local ref");
    (*env)->DeleteLocalRef(env, samples_byte_array);

    end: return err;

    }

    I would be glad of any help! Thank you very much!