Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (49)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors can edit their information on the authors page

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site and around MediaSPIP in general, aims to avoid reference to Web 2.0 and the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • Contribute to translation

    13 April 2011

    You can help us to improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
    To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
    MediaSPIP is currently available in French and English (...)

On other sites (6267)

  • Streaming Audio with OpenAL and FFMPEG

    28 January 2013, by Michael Barth

    Alright, basically I'm working on a simple video player, and I'll probably be asking another question later about lagging video/syncing to audio, but for now I'm having a problem with audio. What I've managed to do is go through all of the audio frames of a video and add them to a vector buffer, then play the audio from that buffer using OpenAL.

    This is inefficient and memory-hogging, so I need to be able to stream it using what I guess is called a rotating buffer. I've run into problems, one being that there's not a lot of information on streaming with OpenAL, let alone the proper way to decode audio with FFMPEG and pipe it to OpenAL. I'm even less comfortable using a vector for my buffer because I honestly have no idea how vectors work in C++, but I somehow managed to pull something out of my head to make it work.

    Currently I have a Video class that looks like this:

    class Video
    {
       public:
           Video(string MOV);
           ~Video();
           bool HasError();
           string GetError();
           void UpdateVideo();
           void RenderToQuad(float Width, float Height);
           void CleanTexture();
       private:
           string FileName;
           bool Error;
           int videoStream, audioStream, FrameFinished, ErrorLevel;
           AVPacket packet;
           AVFormatContext *pFormatCtx;
           AVCodecContext *pCodecCtx, *aCodecCtx;
           AVCodec *pCodec, *aCodec;
           AVFrame *pFrame, *pFrameRGB, *aFrame;

           GLuint VideoTexture;
           struct SwsContext* swsContext;

           ALint state;
           ALuint bufferID, sourceID;
           ALenum format;
           ALsizei freq;

           vector<uint8_t> bufferData;
    };

    The bottom private variables are the relevant ones. Currently I'm decoding audio in the class constructor to an AVFrame and adding the data to bufferData like so:

       av_init_packet(&packet);

       alGenBuffers(1, &bufferID);
       alGenSources(1, &sourceID);

       alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);

       int GotFrame = 0;

       freq = aCodecCtx->sample_rate;
       if (aCodecCtx->channels == 1)
           format = AL_FORMAT_MONO16;
       else
           format = AL_FORMAT_STEREO16;

       while (av_read_frame(pFormatCtx, &packet) >= 0)
       {
           if (packet.stream_index == audioStream)
           {
               avcodec_decode_audio4(aCodecCtx, aFrame, &GotFrame, &packet);
               bufferData.insert(bufferData.end(), aFrame->data[0], aFrame->data[0] + aFrame->linesize[0]);
               av_free_packet(&packet);
           }
       }
       av_seek_frame(pFormatCtx, audioStream, 0, AVSEEK_FLAG_BACKWARD);

       alBufferData(bufferID, format, &bufferData[0], static_cast<ALsizei>(bufferData.size()), freq);

       alSourcei(sourceID, AL_BUFFER, bufferID);

    In my UpdateVideo() is where I'm decoding video to an OpenGL texture through the video stream, so it would make sense for me to decode my audio there and stream it:

    void Video::UpdateVideo()
    {
       alGetSourcei(sourceID, AL_SOURCE_STATE, &state);
       if (state != AL_PLAYING)
           alSourcePlay(sourceID);
       if (av_read_frame(pFormatCtx, &packet) >= 0)
       {
           if (packet.stream_index == videoStream)
           {
               avcodec_decode_video2(pCodecCtx, pFrame, &FrameFinished, &packet);
               if (FrameFinished)
               {
                   sws_scale(swsContext, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
                   av_free_packet(&packet);
               }
           }
           else if (packet.stream_index == audioStream)
           {
               /*
               avcodec_decode_audio4(aCodecCtx, aFrame, &FrameFinished, &packet);
               if (FrameFinished)
               {
                   //Update Audio and rotate buffers here!
               }
               */
           }
           glGenTextures(1, &VideoTexture);
           glBindTexture(GL_TEXTURE_2D, VideoTexture);
           glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
           glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
           glTexImage2D(GL_TEXTURE_2D, 0, 3, pCodecCtx->width, pCodecCtx->height, 0, GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);
       }
       else
       {
           av_seek_frame(pFormatCtx, videoStream, 0, AVSEEK_FLAG_BACKWARD);
       }
    }

    So I guess the big question is: how do I do it? I've got no clue. Any help is appreciated, thank you!
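    The "rotating buffer" the asker describes is what OpenAL calls buffer queueing (alSourceQueueBuffers / alSourceUnqueueBuffers). Below is a minimal sketch of that pattern, not a drop-in fix: it reuses the question's sourceID, format and freq, but NUM_BUFFERS and the DecodeMoreAudio() helper (returning the next chunk of decoded PCM, empty at end of stream) are hypothetical names introduced for illustration:

    ```cpp
    #include <cstdint>
    #include <vector>
    #include <AL/al.h>

    // Hypothetical helper: returns the next chunk of decoded PCM bytes
    // (e.g. one avcodec_decode_audio4() frame's worth), empty at end of stream.
    std::vector<uint8_t> DecodeMoreAudio();

    static const int NUM_BUFFERS = 4;
    static ALuint buffers[NUM_BUFFERS];

    void InitAudioStream(ALuint sourceID, ALenum format, ALsizei freq)
    {
        alGenBuffers(NUM_BUFFERS, buffers);
        // Prime the queue: fill every buffer once, then queue them all.
        for (int i = 0; i < NUM_BUFFERS; ++i)
        {
            std::vector<uint8_t> pcm = DecodeMoreAudio();
            alBufferData(buffers[i], format, pcm.data(),
                         static_cast<ALsizei>(pcm.size()), freq);
        }
        alSourceQueueBuffers(sourceID, NUM_BUFFERS, buffers);
        alSourcePlay(sourceID);
    }

    // Call this once per frame (e.g. from UpdateVideo()).
    void UpdateAudioStream(ALuint sourceID, ALenum format, ALsizei freq)
    {
        ALint processed = 0;
        alGetSourcei(sourceID, AL_BUFFERS_PROCESSED, &processed);
        while (processed-- > 0)
        {
            ALuint buf;
            alSourceUnqueueBuffers(sourceID, 1, &buf);   // take a finished buffer off the front
            std::vector<uint8_t> pcm = DecodeMoreAudio();
            if (pcm.empty())
                break;                                   // end of stream: stop refilling
            alBufferData(buf, format, pcm.data(),
                         static_cast<ALsizei>(pcm.size()), freq);
            alSourceQueueBuffers(sourceID, 1, &buf);     // re-queue it at the back
        }
        // If every buffer drained before we refilled, the source stops; restart it.
        ALint state;
        alGetSourcei(sourceID, AL_SOURCE_STATE, &state);
        if (state != AL_PLAYING)
            alSourcePlay(sourceID);
    }
    ```

    Note that this approach replaces the single alSourcei(sourceID, AL_BUFFER, bufferID) call in the constructor: an OpenAL source is either static or streaming, not both, so the whole-file decode loop would move into the audio branch of UpdateVideo() instead.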

  • Display FFMPEG decoded frame in a GLFW window

    17 June 2020, by Infecto

    I am implementing the client program of a game where the server sends encoded frames of the game to the client (via UDP), while the client decodes them (via FFMPEG) and displays them in a GLFW window. My program has two threads:

    1. Thread 1: renders the content of the uint8_t* variable dataToRender

    2. Thread 2: keeps obtaining frames from the server, decodes them and updates dataToRender accordingly

    Thread 1 does the typical rendering of a GLFW window in a while-loop. I have already tried to display some dummy frame data (a completely red frame) and it worked:

     while (!glfwWindowShouldClose(window)) {
         glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
         ...

         glBindTexture(GL_TEXTURE_2D, tex_handle);
         glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, window_width, window_height, 0, GL_RGB, GL_UNSIGNED_BYTE, dataToRender);
         ...
         glfwSwapBuffers(window);
     }

    Thread 2 is where I am having trouble. I am unable to properly store the decoded frame in my dataToRender variable. On top of that, the frame data is originally in YUV format and needs to be converted to RGB. I use FFMPEG's sws_scale for that, which also gives me a bad dst image pointers error in the console. Here's the code snippet responsible for that part:

     size_t data_size = frameBuffer.size();  // frameBuffer is a std::vector where I accumulate the frame data chunks
     uint8_t* data = frameBuffer.data();  // convert the vector to a pointer
     picture->format = AV_PIX_FMT_RGB24;
     av_frame_get_buffer(picture, 1);
     while (data_size > 0) {
         int ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
             data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
         if (ret < 0) {
             fprintf(stderr, "Error while parsing\n");
             exit(1);
         }
         data += ret;
         data_size -= ret;

         if (pkt->size) {
             swsContext = sws_getContext(
                 c->width, c->height,
                 AV_PIX_FMT_YUV420P, c->width, c->height,
                 AV_PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL
             );
             uint8_t* rgb24[1] = { data };
             int rgb24_stride[1] = { 3 * c->width };
             sws_scale(swsContext, rgb24, rgb24_stride, 0, c->height, picture->data, picture->linesize);

             decode(c, picture, pkt, outname);
             // TODO: copy content of picture->data[0] to "dataToRender" maybe?
         }
     }

    I have already tried doing another sws_scale to copy the content into dataToRender, and I cannot get rid of the bad dst image pointers error. Any advice or solution would be greatly appreciated, as I have been stuck on this for days.
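    For context on that error: sws_scale() reports bad dst image pointers when the destination frame's data pointers are NULL, which happens here because av_frame_get_buffer() is called before width/height are set on picture, and the snippet also feeds the raw parsed bitstream bytes to sws_scale() rather than a decoded frame. The sketch below shows the usual parse → decode → convert order with FFmpeg's send/receive API; it reuses the question's c, pkt and swsContext names, while the yuv/rgb frames and the memcpy destination are illustrative, not taken from the original code:

    ```cpp
    // Sketch only: assumes c is an opened AVCodecContext, pkt a parsed AVPacket,
    // and swsContext a YUV420P -> RGB24 context of matching dimensions.
    AVFrame* yuv = av_frame_alloc();     // receives the decoder's output
    AVFrame* rgb = av_frame_alloc();     // destination for sws_scale

    rgb->format = AV_PIX_FMT_RGB24;
    rgb->width  = c->width;              // without width/height set first,
    rgb->height = c->height;             // av_frame_get_buffer() fails and
    av_frame_get_buffer(rgb, 0);         // rgb->data stays NULL ("bad dst image pointers")

    if (avcodec_send_packet(c, pkt) == 0) {
        while (avcodec_receive_frame(c, yuv) == 0) {
            // Convert the *decoded* YUV frame; yuv->data/linesize are valid here,
            // unlike the raw bitstream bytes the original snippet passed in.
            sws_scale(swsContext, yuv->data, yuv->linesize,
                      0, c->height, rgb->data, rgb->linesize);

            // rgb->data[0] now holds width*height*3 bytes of packed RGB24,
            // ready to be handed to the render thread, e.g.:
            // memcpy(dataToRender, rgb->data[0], c->width * c->height * 3);
        }
    }
    ```

    Whatever mechanism hands the pixels to thread 1 would still need synchronization (a mutex or a double buffer) around dataToRender, since both threads touch it.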


  • vp9 : remove another optimization branch in iadst16 which causes overflows.

    22 April 2015, by Ronald S. Bultje

    See sample vp90-2-14-resize-fp-tiles-16-8.webm from the vp9 test vector
    set to reproduce the issue.

    • [DH] libavcodec/x86/vp9itxfm.asm