
Other articles (49)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page -
Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media-sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, allowing it to spread to new linguistic communities.
To do this, we use SPIP's translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
On other sites (6267)
-
Streaming Audio with OpenAL and FFMPEG
28 January 2013, by Michael Barth
Alright, basically I'm working on a simple video player, and I'll probably be asking another question about laggy video / syncing to audio later, but for now I'm having a problem with audio. What I've managed to do is go through all of the audio frames of a video, add them to a vector buffer, then play the audio from that buffer using OpenAL.
This is inefficient and memory-hogging, so I need to be able to stream it using what I guess is called a rotating buffer. I've run into problems, one being that there's not a lot of information on streaming with OpenAL, let alone the proper way to decode audio with FFMPEG and pipe it to OpenAL. I'm even less comfortable using a vector for my buffer because I honestly have no idea how vectors work in C++, but I somehow managed to pull something out of my head to make it work.
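From what I've gathered, the "rotating buffer" idea maps onto OpenAL's buffer-queueing calls (alSourceQueueBuffers / alSourceUnqueueBuffers). Here's a minimal sketch of that pattern as I understand it; NUM_BUFFERS, BUFFER_SIZE and the fillPCM() helper are placeholders I made up, not real code from my player:

#include <AL/al.h>
#include <cstdint>
#include <vector>

static const int NUM_BUFFERS = 4;      // small ring of buffers
static const int BUFFER_SIZE = 32768;  // bytes of PCM per buffer

// Placeholder: writes up to 'max' bytes of decoded PCM into 'out' and
// returns how many bytes it produced (0 = nothing decoded yet).
size_t fillPCM(uint8_t* out, size_t max);

void updateStream(ALuint source, ALenum format, ALsizei freq)
{
    // Reclaim buffers the source has finished playing...
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);

    std::vector<uint8_t> pcm(BUFFER_SIZE);
    while (processed-- > 0)
    {
        ALuint buf;
        alSourceUnqueueBuffers(source, 1, &buf);
        size_t got = fillPCM(pcm.data(), pcm.size());
        if (got == 0)
            break;  // decoder has no data ready; try again next update
        // ...refill them and put them back at the end of the queue.
        alBufferData(buf, format, pcm.data(), static_cast<ALsizei>(got), freq);
        alSourceQueueBuffers(source, 1, &buf);
    }

    // If the queue ran dry the source stops on its own; kick it again.
    ALint state = 0;
    alGetSourcei(source, AL_SOURCE_STATE, &state);
    if (state != AL_PLAYING)
        alSourcePlay(source);
}

At startup the ring would be primed by filling all NUM_BUFFERS buffers once, queueing them, and calling alSourcePlay.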
Currently I have a Video class that looks like this:
class Video
{
public:
    Video(string MOV);
    ~Video();
    bool HasError();
    string GetError();
    void UpdateVideo();
    void RenderToQuad(float Width, float Height);
    void CleanTexture();
private:
    string FileName;
    bool Error;
    int videoStream, audioStream, FrameFinished, ErrorLevel;
    AVPacket packet;
    AVFormatContext *pFormatCtx;
    AVCodecContext *pCodecCtx, *aCodecCtx;
    AVCodec *pCodec, *aCodec;
    AVFrame *pFrame, *pFrameRGB, *aFrame;
    GLuint VideoTexture;
    struct SwsContext* swsContext;
    ALint state;
    ALuint bufferID, sourceID;
    ALenum format;
    ALsizei freq;
    vector<uint8_t> bufferData; // element type assumed; the template argument was eaten by the HTML
};

The bottom private variables are the relevant ones. Currently I'm decoding audio in the class constructor to an AVFrame and adding the data to bufferData like so:
av_init_packet(&packet);
alGenBuffers(1, &bufferID);
alGenSources(1, &sourceID);
alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
int GotFrame = 0;
freq = aCodecCtx->sample_rate;
if (aCodecCtx->channels == 1)
    format = AL_FORMAT_MONO16;
else
    format = AL_FORMAT_STEREO16;
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
    if (packet.stream_index == audioStream)
    {
        avcodec_decode_audio4(aCodecCtx, aFrame, &GotFrame, &packet);
        bufferData.insert(bufferData.end(), aFrame->data[0], aFrame->data[0] + aFrame->linesize[0]);
        av_free_packet(&packet);
    }
}
av_seek_frame(pFormatCtx, audioStream, 0, AVSEEK_FLAG_BACKWARD);
alBufferData(bufferID, format, &bufferData[0], static_cast<ALsizei>(bufferData.size()), freq);
alSourcei(sourceID, AL_BUFFER, bufferID);

UpdateVideo() is where I decode the video stream to an OpenGL texture, so it would make sense to decode my audio there and stream it:
void Video::UpdateVideo()
{
    alGetSourcei(sourceID, AL_SOURCE_STATE, &state);
    if (state != AL_PLAYING)
        alSourcePlay(sourceID);
    if (av_read_frame(pFormatCtx, &packet) >= 0)
    {
        if (packet.stream_index == videoStream)
        {
            avcodec_decode_video2(pCodecCtx, pFrame, &FrameFinished, &packet);
            if (FrameFinished)
            {
                sws_scale(swsContext, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
                av_free_packet(&packet);
            }
        }
        else if (packet.stream_index == audioStream)
        {
            /*
            avcodec_decode_audio4(aCodecCtx, aFrame, &FrameFinished, &packet);
            if (FrameFinished)
            {
                //Update Audio and rotate buffers here!
            }
            */
        }
        glGenTextures(1, &VideoTexture);
        glBindTexture(GL_TEXTURE_2D, VideoTexture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, 3, pCodecCtx->width, pCodecCtx->height, 0, GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);
    }
    else
    {
        av_seek_frame(pFormatCtx, videoStream, 0, AVSEEK_FLAG_BACKWARD);
    }
}

So I guess the big question is: how do I do it? I've got no clue. Any help is appreciated, thank you!
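For completeness, here's a sketch of what I imagine the commented-out audio branch could become, assuming a pool of buffers was created up front with alGenBuffers(NUM_BUFFERS, buffers) and primed onto the source; the sample-format caveat in the comments is an assumption on my part:

else if (packet.stream_index == audioStream)
{
    int gotFrame = 0;
    avcodec_decode_audio4(aCodecCtx, aFrame, &gotFrame, &packet);
    if (gotFrame)
    {
        // Recycle a buffer the source has already played through, if any.
        ALint processed = 0;
        alGetSourcei(sourceID, AL_BUFFERS_PROCESSED, &processed);
        if (processed > 0)
        {
            ALuint buf;
            alSourceUnqueueBuffers(sourceID, 1, &buf);
            // Assumes packed 16-bit samples (matching AL_FORMAT_*16);
            // planar output such as AV_SAMPLE_FMT_FLTP would need
            // converting before it can be handed to OpenAL.
            int size = av_samples_get_buffer_size(NULL, aCodecCtx->channels,
                                                  aFrame->nb_samples,
                                                  aCodecCtx->sample_fmt, 1);
            alBufferData(buf, format, aFrame->data[0], size, freq);
            alSourceQueueBuffers(sourceID, 1, &buf);
        }
    }
    av_free_packet(&packet);
}

A real player would keep decoded samples in a FIFO when no buffer is free, rather than dropping them as this sketch does.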
-
Display FFMPEG decoded frame in a GLFW window
17 June 2020, by Infecto
I am implementing the client program of a game where the server sends encoded frames of the game to the client (via UDP), while the client decodes them (via FFMPEG) and displays them in a GLFW window.
My program has two threads:

- Thread 1: renders the content of the uint8_t* variable dataToRender
- Thread 2: keeps obtaining frames from the server, decodes them and updates dataToRender accordingly
Thread 1 does the typical rendering of a GLFW window in a while-loop. I have already tried to display some dummy frame data (a completely red frame) and it worked:
while (!glfwWindowShouldClose(window)) {
 glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
 ...

 glBindTexture(GL_TEXTURE_2D, tex_handle);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, window_width, window_height, 0, GL_RGB, GL_UNSIGNED_BYTE, dataToRender);
 ...
 glfwSwapBuffers(window);
}
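
One aside on the upload: glTexImage2D reallocates the texture's storage on every call, so the usual pattern is to allocate once and then update with glTexSubImage2D. A sketch reusing the tex_handle, window_width, window_height and dataToRender names from above:

// One-time setup: allocate storage (the data pointer may be null here).
glBindTexture(GL_TEXTURE_2D, tex_handle);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, window_width, window_height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, nullptr);

// Per frame inside the loop: only overwrite the pixel data.
glBindTexture(GL_TEXTURE_2D, tex_handle);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, window_width, window_height,
                GL_RGB, GL_UNSIGNED_BYTE, dataToRender);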




Thread 2 is where I am having trouble. I am unable to properly store the decoded frame in my dataToRender variable. On top of that, the frame data is originally in YUV format and needs to be converted to RGB. I use FFMPEG's sws_scale for that, which also gives me a "bad dst image pointers" error output in the console. Here's the code snippet responsible for that part:


size_t data_size = frameBuffer.size(); // frameBuffer is a std::vector<uint8_t> where I accumulate the frame data chunks
uint8_t* data = frameBuffer.data();    // convert the vector to a pointer
picture->format = AV_PIX_FMT_RGB24;
av_frame_get_buffer(picture, 1);
while (data_size > 0) {
    int ret = av_parser_parse2(parser, c, &pkt->data, &pkt->size,
                               data, data_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
    if (ret < 0) {
        fprintf(stderr, "Error while parsing\n");
        exit(1);
    }
    data += ret;
    data_size -= ret;

    if (pkt->size) {
        swsContext = sws_getContext(
            c->width, c->height, AV_PIX_FMT_YUV420P,
            c->width, c->height, AV_PIX_FMT_RGB24,
            SWS_BILINEAR, NULL, NULL, NULL
        );
        uint8_t* rgb24[1] = { data };
        int rgb24_stride[1] = { 3 * c->width };
        sws_scale(swsContext, rgb24, rgb24_stride, 0, c->height, picture->data, picture->linesize);

        decode(c, picture, pkt, outname);
        // TODO: copy content of picture->data[0] to "dataToRender" maybe?
    }
}




I have already tried doing another sws_scale to copy the content to dataToRender and I cannot get rid of the "bad dst image pointers" error. Any advice or solution to the problem would be greatly appreciated, as I have been stuck on this for days.
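
From what I can tell, "bad dst image pointers" means the destination frame's data pointers were never allocated: width, height and format have to be set on the AVFrame before av_frame_get_buffer(), and the source passed to sws_scale() should be the decoded YUV frame rather than the raw bitstream bytes. Here's a hedged sketch of that step; the decodedFrame parameter is a stand-in for whatever my decode() call produces:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}

// Sketch: convert one decoded YUV420P frame to a packed RGB24 frame.
// Returns a frame owning the RGB pixels, or nullptr on failure.
AVFrame* toRGB24(AVCodecContext* c, AVFrame* decodedFrame)
{
    AVFrame* rgb = av_frame_alloc();
    if (!rgb)
        return nullptr;
    rgb->format = AV_PIX_FMT_RGB24;
    rgb->width  = c->width;   // width/height/format must be set BEFORE
    rgb->height = c->height;  // av_frame_get_buffer(), or data[] stays NULL
    if (av_frame_get_buffer(rgb, 0) < 0)
    {
        av_frame_free(&rgb);
        return nullptr;
    }
    SwsContext* sws = sws_getContext(c->width, c->height, AV_PIX_FMT_YUV420P,
                                     c->width, c->height, AV_PIX_FMT_RGB24,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    // Source: the decoded frame's Y/U/V planes, not the raw encoded bytes.
    sws_scale(sws, decodedFrame->data, decodedFrame->linesize,
              0, c->height, rgb->data, rgb->linesize);
    sws_freeContext(sws);
    // rgb->data[0] (row stride rgb->linesize[0]) could then be copied into
    // dataToRender, guarded by a mutex shared with the render thread.
    return rgb;
}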

-
vp9: remove another optimization branch in iadst16 which causes overflows.
22 April 2015, by Ronald S. Bultje