
Other articles (104)
-
Keeping control of your media in your hands
13 April 2011 — The vocabulary used on this site and around MediaSPIP in general aims to avoid reference to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
The videos
21 April 2011 — Like "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 tag.
One drawback of this tag is that it is not recognised correctly by some browsers (namely Internet Explorer) and that each browser natively handles only certain video formats.
Its main advantage is native browser support for video, which removes the need for Flash and (...) -
Helping to translate it
10 April 2011 — You can help us improve the wording used in the software, or translate it into any new language, opening it up to new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
Currently MediaSPIP is only available in French and (...)
On other sites (9164)
-
How to use ffmpeg in a Flask server to convert between audio formats without writing files to disk?
14 April 2020, by Athwulf — I have successfully managed to use ffmpeg from Python to convert the format of some audio files like this:



command = "ffmpeg -i audio.wav -vn -acodec pcm_s16le output.wav"
subprocess.call(command, shell=True)




However I want to do this in memory and avoid saving the input and output files to disk.



I have found the following code to do such a thing (Passing python’s file like object to ffmpeg via subprocess):



command = ['ffmpeg', '-y', '-i', '-', '-f', 'wav', '-']
process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE)  # stdout must be captured for communicate() to return it
wav, errordata = process.communicate(file)




But I am struggling to use this in my context.



I am receiving the file on a server as part of a multipart/form-data request.



@server.route("/api/getText", methods=["POST"])
def api():
    if "multipart/form-data" not in request.content_type:
        return Response("invalid content type: {}".format(request.content_type))
    # check file format
    file = request.files['file']
    if file:
        print('**found file', file.filename)




Now I have the file as a FileStorage object (https://tedboy.github.io/flask/generated/generated/werkzeug.FileStorage.html). This object has a stream, which can be accessed using the read method. So I thought I might be able to use this as input for ffmpeg like so:



f = file.read()
command = ['ffmpeg', '-y', '-i', '-', '-f', 'wav', '-']
process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE)  # stdout must be captured for communicate() to return it
wav, errordata = process.communicate(f)




However this yields the following error:



AssertionError: Given audio file must be a filename string or a file-like object




I have also tried another approach I found online, using io.BytesIO (I can no longer find the source):



memfile = io.BytesIO() # create file-object
memfile.write(file.read()) # write in file-object
memfile.seek(0) # move to beginning so it will read from beginning




And then trying this again :



command = ['ffmpeg', '-y', '-i', '-', '-f', 'wav', '-']
process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
wav, errordata = process.communicate(memfile)




This gets me the following error :



TypeError: a bytes-like object is required, not '_io.BytesIO'
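As the message itself states, communicate() expects a bytes-like object, not the BytesIO wrapper; the underlying buffer can be pulled out with getvalue(). A minimal sketch with placeholder bytes (no ffmpeg involved):

```python
import io

memfile = io.BytesIO()
memfile.write(b"fake audio bytes")  # placeholder, not a real WAV
memfile.seek(0)

# communicate() rejects the BytesIO object itself...
assert not isinstance(memfile, (bytes, bytearray))

# ...but accepts the raw buffer it wraps:
payload = memfile.getvalue()
assert isinstance(payload, bytes)
```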




Does anyone have an idea of how to do this?



Update



The first error message is actually not an error message thrown by ffmpeg. As v25 correctly pointed out in their answer, the first approach also returns a bytes object and is also a valid solution.
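To see that the pipe pattern really does round-trip bytes, here is a runnable sketch in which a trivial pass-through child process stands in for the real ffmpeg command (which would be ['ffmpeg', '-y', '-i', '-', '-f', 'wav', '-']); note that stdout must be opened as a pipe for communicate() to return the output:

```python
import subprocess
import sys

# Stand-in for ffmpeg: a child process that echoes stdin to stdout unchanged.
command = [sys.executable, "-c",
           "import sys; sys.stdout.buffer.write(sys.stdin.buffer.read())"]

process = subprocess.Popen(command,
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE)
wav, errordata = process.communicate(b"fake wav bytes")

assert isinstance(wav, bytes)         # bytes in, bytes out
assert wav == b"fake wav bytes"
```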



The error message was thrown by a library (speech_recognition) when trying to work with the converted file. In the unlikely case that someone comes across the same problem, here is the solution:



The bytes object returned by ffmpeg (the variable wav) has to be turned into a file-like object, as the error message implies. This can easily be done like this:



memfileOutput = io.BytesIO(wav)
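Putting the pieces together, the whole in-memory flow reduces to: read the upload into bytes, pipe them through ffmpeg, and wrap the resulting bytes in io.BytesIO for file-like consumers such as speech_recognition. A sketch under that assumption, where a hypothetical convert() stands in for the ffmpeg pipe so the example stays runnable:

```python
import io


def convert(raw: bytes) -> bytes:
    """Hypothetical stand-in for the ffmpeg step, i.e. roughly:
    subprocess.Popen(cmd, stdin=PIPE, stdout=PIPE).communicate(raw)[0]"""
    return raw  # identity here, just to keep the sketch self-contained


upload = io.BytesIO(b"uploaded file bytes")  # plays the role of request.files['file']
wav = convert(upload.read())                 # bytes -> ffmpeg -> bytes
memfileOutput = io.BytesIO(wav)              # file-like object for downstream libraries

assert memfileOutput.read() == b"uploaded file bytes"
```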


-
Encoding .png images with h264 to a file on disk
21 February 2021, by xyfix — Can somebody help me find out why I end up with a file on disk that is only 24 kB and not readable by VLC or similar, even though I send valid YUV images to the codec? I have added the .h and .cpp files. Up to "avcodec_receive_packet" everything seems to be OK. The call to "avcodec_send_frame" returns 0, so that must be OK, but "avcodec_receive_packet" returns -11. If I flush the encoder (currently commented out), "avcodec_receive_packet" returns 0 and I can see encoded data if I store it on disk. The input image to the encoder is also correct (currently commented out) and checked. I'm aiming for intra-frame encoding, so I should get the encoded frame data back, but I don't get anything back even if I send 24 images to it.


.h file


#ifndef MOVIECODEC_H
#define MOVIECODEC_H

#include 

extern "C"
{
 #include "Codec/include/libavcodec/avcodec.h"
 #include "Codec/include/libavdevice/avdevice.h"
 #include "Codec/include/libavformat/avformat.h"
 #include "Codec/include/libavutil/avutil.h"
 #include "Codec/include/libavutil/imgutils.h"
 #include "Codec/include/libswscale/swscale.h"
}


class MovieCodec
{
public:

 MovieCodec(const char *filename);

 ~MovieCodec();

 void encodeImage( const cv::Mat& image );

 void close();
 
private :

 void add_stream();

 void openVideoCodec();

 void write_video_frame(const cv::Mat& image);

 void createFrame( const cv::Mat& image );

private:

 static int s_frameCount;

 int m_timeVideo = 0;

 std::string m_filename;

 FILE* m_file;

 AVCodec* m_encoder = NULL;

 AVOutputFormat* m_outputFormat = NULL;

 AVFormatContext* m_formatCtx = NULL;

 AVCodecContext* m_codecCtx = NULL;

 AVStream* m_streamOut = NULL;

 AVFrame* m_frame = NULL;

 AVPacket* m_packet = NULL;

};



.cpp file


#ifndef MOVIECODEC_CPP
#define MOVIECODEC_CPP

#include "moviecodec.h"


#define STREAM_DURATION 5.0
#define STREAM_FRAME_RATE 24
#define STREAM_NB_FRAMES ((int)(STREAM_DURATION * STREAM_FRAME_RATE))
#define STREAM_PIX_FMT AV_PIX_FMT_YUV420P /* default pix_fmt */


static int sws_flags = SWS_BICUBIC;
int MovieCodec::s_frameCount = 0;

MovieCodec::MovieCodec( const char* filename ) :
 m_filename( filename ),
 m_encoder( avcodec_find_encoder( AV_CODEC_ID_H264 ))
{
 av_log_set_level(AV_LOG_VERBOSE);

 int ret(0);

 m_file = fopen( m_filename.c_str(), "wb");

 // allocate the output media context
 ret = avformat_alloc_output_context2( &m_formatCtx, m_outputFormat, NULL, m_filename.c_str());

 if (!m_formatCtx)
 return;

 m_outputFormat = m_formatCtx->oformat;

 // Add the video stream using H264 codec
 add_stream();

 // Open video codec and allocate the necessary encode buffers
 if (m_streamOut)
 openVideoCodec();

 av_dump_format( m_formatCtx, 0, m_filename.c_str(), 1);

 // Open the output media file, if needed
 if (!( m_outputFormat->flags & AVFMT_NOFILE))
 {
 ret = avio_open( &m_formatCtx->pb, m_filename.c_str(), AVIO_FLAG_WRITE);

 if (ret < 0)
 {
 char error[255];
 ret = av_strerror( ret, error, 255);
 fprintf(stderr, "Could not open '%s': %s\n", m_filename.c_str(), error);
 return ;
 }
 }
 else
 {
 return;
 }

 m_formatCtx->flush_packets = 1;

 ret = avformat_write_header( m_formatCtx, NULL );

 if (ret < 0)
 {
 char error[255];
 av_strerror(ret, error, 255);
 fprintf(stderr, "Error occurred when opening output file: %s\n", error);
 return;
 }


 if ( m_frame )
 m_frame->pts = 0;
}



MovieCodec::~MovieCodec()
{
 close();
}



void MovieCodec::encodeImage(const cv::Mat &image)
{
 // Compute video time from last added video frame
 m_timeVideo = ((double)m_frame->pts) * av_q2d(m_streamOut->time_base);

 // Stop media if enough time
 if (!m_streamOut /*|| m_timeVideo >= STREAM_DURATION*/)
 return;

 // Add a video frame
 write_video_frame( image );

 // Increase frame pts according to time base
 m_frame->pts += av_rescale_q(1, m_codecCtx->time_base, m_streamOut->time_base);
}


void MovieCodec::close()
{
 int ret( 0 );

 // Write media trailer
// if( m_formatCtx )
// ret = av_write_trailer( m_formatCtx );

 /* flush the encoder */
 ret = avcodec_send_frame(m_codecCtx, NULL);

 /* Close each codec. */
 if ( m_streamOut )
 {
 av_free( m_frame->data[0]);
 av_free( m_frame );
 }

 if (!( m_outputFormat->flags & AVFMT_NOFILE))
 /* Close the output file. */
 ret = avio_close( m_formatCtx->pb);


 /* free the stream */
 avformat_free_context( m_formatCtx );

 fflush( m_file );
}


void MovieCodec::createFrame( const cv::Mat& image )
{
 /**
 * \note allocate frame
 */
 m_frame = av_frame_alloc();
 m_frame->format = STREAM_PIX_FMT;
 m_frame->width = image.cols();
 m_frame->height = image.rows();
 m_frame->pict_type = AV_PICTURE_TYPE_I;
 int ret = av_image_alloc(m_frame->data, m_frame->linesize, m_frame->width, m_frame->height, STREAM_PIX_FMT, 1);

 if (ret < 0)
 {
 return;
 }

 struct SwsContext* sws_ctx = sws_getContext((int)image.cols(), (int)image.rows(), AV_PIX_FMT_RGB24,
 (int)image.cols(), (int)image.rows(), STREAM_PIX_FMT, 0, NULL, NULL, NULL);

 const uint8_t* rgbData[1] = { (uint8_t* )image.getData() };
 int rgbLineSize[1] = { 3 * image.cols() };

 sws_scale(sws_ctx, rgbData, rgbLineSize, 0, image.rows(), m_frame->data, m_frame->linesize);
}


/* Add an output stream. */
void MovieCodec::add_stream()
{
 AVCodecID codecId = AV_CODEC_ID_H264;

 if (!( m_encoder ))
 {
 fprintf(stderr, "Could not find encoder for '%s'\n",
 avcodec_get_name(codecId));
 return;
 }

 // Get the stream for codec
 m_streamOut = avformat_new_stream(m_formatCtx, m_encoder);

 if (!m_streamOut) {
 fprintf(stderr, "Could not allocate stream\n");
 return;
 }

 m_streamOut->id = m_formatCtx->nb_streams - 1;

 m_codecCtx = avcodec_alloc_context3( m_encoder);

 m_streamOut->codecpar->codec_id = codecId;
 m_streamOut->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 m_streamOut->codecpar->bit_rate = 400000;
 m_streamOut->codecpar->width = 800;
 m_streamOut->codecpar->height = 640;
 m_streamOut->codecpar->format = STREAM_PIX_FMT;
 m_streamOut->codecpar->codec_tag = 0x31637661;
 m_streamOut->codecpar->video_delay = 0;
 m_streamOut->time_base = { 1, STREAM_FRAME_RATE };


 avcodec_parameters_to_context( m_codecCtx, m_streamOut->codecpar);
 
 m_codecCtx->gop_size = 0; /* intra-only: emit an intra frame for every picture */
 m_codecCtx->max_b_frames = 0;
 m_codecCtx->time_base = { 1, STREAM_FRAME_RATE };
 m_codecCtx->framerate = { STREAM_FRAME_RATE, 1 };
 m_codecCtx->pix_fmt = STREAM_PIX_FMT;



 if (m_streamOut->codecpar->codec_id == AV_CODEC_ID_H264)
 {
 av_opt_set( m_codecCtx, "preset", "ultrafast", 0 );
 av_opt_set( m_codecCtx, "vprofile", "baseline", 0 );
 av_opt_set( m_codecCtx, "tune", "zerolatency", 0 );
 }

// /* Some formats want stream headers to be separate. */
// if (m_formatCtx->oformat->flags & AVFMT_GLOBALHEADER)
// m_codecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;


}


void MovieCodec::openVideoCodec()
{
 int ret;

 /* open the codec */
 ret = avcodec_open2(m_codecCtx, m_encoder, NULL);

 if (ret < 0)
 {
 char error[255];
 av_strerror(ret, error, 255);
 fprintf(stderr, "Could not open video codec: %s\n", error);
 }
}



void MovieCodec::write_video_frame( const cv::Mat& image )
{
 int ret;

 createFrame( image );


 if (m_formatCtx->oformat->flags & 0x0020 ) // 0x0020 is the old AVFMT_RAWPICTURE flag
 {
 /* Raw video case - directly store the picture in the packet */
 AVPacket pkt;
 av_init_packet(&pkt);

 pkt.flags |= AV_PKT_FLAG_KEY;
 pkt.stream_index = m_streamOut->index;
 pkt.data = m_frame->data[0];
 pkt.size = sizeof(AVPicture);

// ret = av_interleaved_write_frame(m_formatCtx, &pkt);
 ret = av_write_frame( m_formatCtx, &pkt );
 }
 else
 {
 AVPacket pkt;
 av_init_packet(&pkt);

 /* encode the image */

//cv::Mat yuv420p( m_frame->height + m_frame->height/2, m_frame->width, CV_8UC1, m_frame->data[0]);
//cv::Mat cvmIm;
//cv::cvtColor(yuv420p,cvmIm,CV_YUV420p2BGR);
//cv::imwrite("c:\\tmp\\YUVoriginal.png", cvmIm);

 ret = avcodec_send_frame(m_codecCtx, m_frame);

 if (ret < 0)
 {
 char error[255];
 av_strerror(ret, error, 255);
 fprintf(stderr, "Error encoding video frame: %s\n", error);
 return;
 }

 /* If size is zero, it means the image was buffered. */
// ret = avcodec_receive_packet(m_codecCtx, &pkt);

 do
 {
 ret = avcodec_receive_packet(m_codecCtx, &pkt);

 if (ret == 0)
 {
 ret = av_write_frame( m_formatCtx, &pkt );
 av_packet_unref(&pkt);

 break;
 }
// else if ((ret < 0) && (ret != AVERROR(EAGAIN)))
// {
// return;
// }
// else if (ret == AVERROR(EAGAIN))
// {
// /* flush the encoder */
// ret = avcodec_send_frame(m_codecCtx, NULL);
//
// if (0 > ret)
// return;
// }
 } while (ret == 0);

 if( !ret && pkt.size)
 {
 pkt.stream_index = m_streamOut->index;

 /* Write the compressed frame to the media file. */
// ret = av_interleaved_write_frame(m_formatCtx, &pkt);
 ret = av_write_frame( m_formatCtx, &pkt );
 }
 else
 {
 ret = 0;
 }
 }

 if (ret != 0)
 {
 char error[255];
 av_strerror(ret, error, 255);
 fprintf(stderr, "Error while writing video frame: %s\n", error);
 return;
 }

 s_frameCount++;
}


