
Other articles (67)
-
Automatic backup of SPIP channels
1 April 2010
When setting up an open platform, it is important for hosting providers to have fairly regular backups available to guard against any potential problem.
This task relies on two SPIP plugins: Saveauto, which takes regular backups of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...) -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out. -
Use it, talk about it, critique it
10 April 2011
The first thing to do is to talk about it, either directly with the people involved in its development or with those around you, to convince new people to use it.
The larger the community, the faster the development will move...
A mailing list is available for any exchange between users.
On other sites (11068)
-
Fill output video with black screen in case of missing the input stream or switch input UDP stream to another source
13 May 2018, by Omar Mahmoud
I am using FFmpeg to record streams from UDP sources, but unfortunately, once the input stream goes missing, ffmpeg stops recording and only appends video to the file when the server sees the stream again.
Since these recording files are time-based, I need to fill the video with a black screen whenever the input stream is missing.
ffmpeg -i 'udp://224.12.12.1:4000' -t 00:45:00 -vcodec copy -acodec copy -f mpegts /record/eEs1526947.ts
How can I do this, either with ffmpeg or with another CLI-based service or process?
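
One possible approach, sketched under assumptions (the resolution, frame rate, and codecs below are placeholders that must match the recorded stream, and the file names are illustrative): detect the gap externally, generate a black MPEG-TS segment of the missing duration from ffmpeg's lavfi color and anullsrc sources, and splice it into the recording with the concat protocol:

ffmpeg -f lavfi -i color=c=black:s=1280x720:r=25 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=48000 -c:v libx264 -c:a aac -t 00:05:00 -f mpegts /record/black_5min.ts
ffmpeg -i "concat:/record/part1.ts|/record/black_5min.ts|/record/part2.ts" -c copy /record/eEs1526947_filled.ts

For the copied concatenation to play cleanly, the filler segment's stream parameters have to line up with those of the recorded segments.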
-
FFmpeg extracts black image from H264 stream
8 June 2022, by massivemoisture
I have a C# application that receives an H264 stream through a socket. I want to continuously get the latest image from that stream.


Here's what I did with FFmpeg 5.0.1, just a rough sample to get ONE latest image. Here is how I start FFmpeg:


var ffmpegInfo = new ProcessStartInfo(FFMPEG_PATH);
ffmpegInfo.RedirectStandardInput = true;
ffmpegInfo.RedirectStandardOutput = true;
ffmpegInfo.RedirectStandardError = true;
ffmpegInfo.UseShellExecute = false;

ffmpegInfo.Arguments = "-i pipe: -f h264 -pix_fmt bgr24 -an -sn pipe:";

ffmpegInfo.CreateNoWindow = true;

Process myFFmpeg = new Process();
myFFmpeg.StartInfo = ffmpegInfo;
myFFmpeg.EnableRaisingEvents = true;
myFFmpeg.Start();

var inStream = myFFmpeg.StandardInput.BaseStream;
FileStream baseStream = myFFmpeg.StandardOutput.BaseStream as FileStream;
myFFmpeg.BeginErrorReadLine();



Then I start a new thread to receive the stream through the socket:


// inStream is "myFFmpeg.StandardInput.BaseStream" from the code block above
var t = Task.Run(() => ReceiveStream(inStream));



Next, I read the output from FFmpeg:


byte[] decoded = new byte[Width * Height * 3];
int numBytesToRead = Width * Height * 3;
int numBytesRead = 0;

while (numBytesToRead > 0)
{
 int n = baseStream.Read(decoded, numBytesRead, numBytesToRead);
 Console.WriteLine($"Read {n} bytes");
 if (n == 0)
 {
 break;
 } 
 numBytesRead += n;
 numBytesToRead -= n;
}



Lastly, I use the ImageSharp library to save the decoded byte array as a JPEG file:

image.Save("test.jpeg", encoder);



However, test.jpeg always comes out as a black image. What did I do wrong?
Here's the stderr log that I got from ffmpeg:


 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: h264 (High), yuv420p(tv, smpte170m/bt470bg/smpte170m, progressive), 1080x2256, 25 fps, 25 tbr, 1200k tbn
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Incompatible pixel format 'bgr24' for codec 'libx264', auto-selecting format 'yuv444p'
[libx264 @ 0x11b9068f0] using cpu capabilities: ARMv8 NEON
[libx264 @ 0x11b9068f0] profile High 4:4:4 Predictive, level 5.0, 4:4:4, 8-bit
Output #0, h264, to 'pipe:':
 Metadata:
 encoder : Lavf59.16.100
 Stream #0:0: Video: h264, yuv444p(tv, smpte170m/bt470bg/smpte170m, progressive), 1080x2256, q=2-31, 25 fps, 25 tbn
 Metadata:
 encoder : Lavc59.18.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame= 1 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x 
frame= 56 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
frame= 87 fps= 83 q=28.0 size= 370kB time=00:00:01.16 bitrate=2610.6kbits/s speed=1.11x
frame= 118 fps= 75 q=28.0 size= 698kB time=00:00:02.40 bitrate=2381.4kbits/s speed=1.54x
frame= 154 fps= 75 q=28.0 size= 1083kB time=00:00:03.84 bitrate=2311.1kbits/s speed=1.86x
...



Thank you!


Edit: as suggested by @kesh, I have changed h264 to rawvideo. The arguments are now: -i pipe: -f rawvideo -pix_fmt bgr24 -an -sn pipe:


Here's the output of ffmpeg:


Input #0, h264, from 'pipe:':
 Duration: N/A, bitrate: N/A
 Stream #0:0: Video: h264 (High), yuv420p(tv, smpte170m/bt470bg/smpte170m, progressive), 1080x2256, 25 fps, 25 tbr, 1200k tbn
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
// About 9 of these "No accelerated colorspace..." messages
[swscaler @ 0x128690000] [swscaler @ 0x1286a0000] No accelerated colorspace conversion found from yuv420p to bgr24.
Output #0, rawvideo, to 'pipe:':
 Metadata:
 encoder : Lavf59.16.100
 Stream #0:0: Video: rawvideo (BGR[24] / 0x18524742), bgr24(pc, gbr/bt470bg/smpte170m, progressive), 1080x2256, q=2-31, 1461888 kb/s, 25 fps, 25 tbn
 Metadata:
 encoder : Lavc59.18.100 rawvideo
frame= 1 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A speed= 0x
// FFmpeg outputs no more log after this
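
One likely reason the log stops at frame 1: nothing drains ffmpeg's stdout after the first frame, so the pipe buffer fills and ffmpeg blocks. With -f rawvideo, each frame on the pipe is exactly width * height * 3 bytes of BGR24, and a pipe read may return fewer bytes than requested, so the reader has to accumulate until a full frame arrives. A minimal C sketch of that read loop (the 1080x2256 size comes from the log above; reading from stdin is an assumption of the sketch):

#include <stdio.h>
#include <stdlib.h>

/* Read exactly frame_size bytes from the pipe; fread on a pipe may
   return a short count, so loop until the frame buffer is full. */
static int read_frame(FILE *in, unsigned char *buf, size_t frame_size)
{
    size_t got = 0;
    while (got < frame_size) {
        size_t n = fread(buf + got, 1, frame_size - got, in);
        if (n == 0)
            return -1; /* EOF or read error */
        got += n;
    }
    return 0;
}

int main(void)
{
    const size_t width = 1080, height = 2256;     /* from the ffmpeg log */
    const size_t frame_size = width * height * 3; /* BGR24: 3 bytes per pixel */
    unsigned char *frame = malloc(frame_size);
    if (frame == NULL)
        return 1;
    /* Keep draining frames so ffmpeg's stdout never stalls. */
    while (read_frame(stdin, frame, frame_size) == 0) {
        /* hand the completed frame to the consumer here */
    }
    free(frame);
    return 0;
}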



-
Output black when I decode h264 720p with ffmpeg
6 December 2017, by José Marqueses Saxo
First, sorry for my English. When I decode h264 720p on the ardrone2.0, my output is black and I can't see anything.
I have tried changing pCodecCtx->pix_fmt = AV_PIX_FMT_BGR24; to pCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P; and pCodecCtxH264->pix_fmt = AV_PIX_FMT_BGR24; to pCodecCtxH264->pix_fmt = AV_PIX_FMT_YUV420P; but my program crashes. What am I doing wrong? Thank you. See part of my code:

av_register_all();
avcodec_register_all();
avformat_network_init();

// 1.2. Open video file
if (avformat_open_input(&pFormatCtx, drone_addr, NULL, NULL) != 0) {
    mexPrintf("No connection with drone");
    EndVideo();
    return;
}

pCodec = avcodec_find_decoder(AV_CODEC_ID_H264);
pCodecCtx = avcodec_alloc_context3(pCodec);
pCodecCtx->pix_fmt = AV_PIX_FMT_BGR24;
pCodecCtx->skip_frame = AVDISCARD_DEFAULT;
pCodecCtx->error_concealment = FF_EC_GUESS_MVS | FF_EC_DEBLOCK;
pCodecCtx->err_recognition = AV_EF_CAREFUL;
pCodecCtx->skip_loop_filter = AVDISCARD_DEFAULT;
pCodecCtx->workaround_bugs = FF_BUG_AUTODETECT;
pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
pCodecCtx->codec_id = AV_CODEC_ID_H264;
pCodecCtx->skip_idct = AVDISCARD_DEFAULT;
pCodecCtx->width = 1280;
pCodecCtx->height = 720;

pCodecH264 = avcodec_find_decoder(AV_CODEC_ID_H264);
pCodecCtxH264 = avcodec_alloc_context3(pCodecH264);
pCodecCtxH264->pix_fmt = AV_PIX_FMT_BGR24;
pCodecCtxH264->skip_frame = AVDISCARD_DEFAULT;
pCodecCtxH264->error_concealment = FF_EC_GUESS_MVS | FF_EC_DEBLOCK;
pCodecCtxH264->err_recognition = AV_EF_CAREFUL;
pCodecCtxH264->skip_loop_filter = AVDISCARD_DEFAULT;
pCodecCtxH264->workaround_bugs = FF_BUG_AUTODETECT;
pCodecCtxH264->codec_type = AVMEDIA_TYPE_VIDEO;
pCodecCtxH264->codec_id = AV_CODEC_ID_H264;
pCodecCtxH264->skip_idct = AVDISCARD_DEFAULT;

if (avcodec_open2(pCodecCtxH264, pCodecH264, &optionsDict) < 0) {
    mexPrintf("Error opening H264 codec");
    return;
}

pFrame_BGR24 = av_frame_alloc();
if (pFrame_BGR24 == NULL) {
    mexPrintf("Could not allocate pFrame_BGR24\n");
    return;
}

// Determine required buffer size and allocate buffer
buffer_BGR24 = (uint8_t *)av_mallocz(av_image_get_buffer_size(
    AV_PIX_FMT_BGR24, pCodecCtx->width,
    ((pCodecCtx->height == 720) ? 720 : pCodecCtx->height) * sizeof(uint8_t) * 3,
    1));

// Assign buffer to image planes
av_image_fill_arrays(pFrame_BGR24->data, pFrame_BGR24->linesize, buffer_BGR24,
                     AV_PIX_FMT_BGR24, pCodecCtx->width, pCodecCtx->height, 1);

// Format conversion context
pConvertCtx_BGR24 = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                                   pCodecCtx->pix_fmt,
                                   pCodecCtx->width, pCodecCtx->height,
                                   AV_PIX_FMT_BGR24,
                                   SWS_BILINEAR | SWS_ACCURATE_RND, 0, 0, 0);

// 1.6. Get video frames
pFrame = av_frame_alloc();
av_init_packet(&packet);
packet.data = NULL;
packet.size = 0;
}

// Capture one frame
void video::capture(mxArray *plhs[]) {
    if (av_read_frame(pFormatCtx, &packet) < 0) {
        mexPrintf("Error reading frame");
        return;
    }
    do {
        do {
            rest = avcodec_send_packet(pCodecCtxH264, &packet);
        } while (rest == AVERROR(EAGAIN));
        if (rest == AVERROR_EOF || rest == AVERROR(EINVAL)) {
            printf("AVERROR(EAGAIN): %d, AVERROR_EOF: %d, AVERROR(EINVAL): %d\n",
                   AVERROR(EAGAIN), AVERROR_EOF, AVERROR(EINVAL));
            printf("fe_read_frame: Frame getting error (%d)!\n", rest);
            return;
        }
        rest = avcodec_receive_frame(pCodecCtxH264, pFrame);
    } while (rest == AVERROR(EAGAIN));
    if (rest == AVERROR_EOF || rest == AVERROR(EINVAL)) {
        // An error or EOF occurred; break out and return what we have so far.
        printf("AVERROR(EAGAIN): %d, AVERROR_EOF: %d, AVERROR(EINVAL): %d\n",
               AVERROR(EAGAIN), AVERROR_EOF, AVERROR(EINVAL));
        printf("fe_read_frame: EOF or some other decoding error (%d)!\n", rest);
        return;
    }

    // 2.1.1. Convert frame to GRAYSCALE [or BGR] for OpenCV
    sws_scale(pConvertCtx_BGR24, (const uint8_t *const *)pFrame->data,
              pFrame->linesize, 0, pCodecCtx->height,
              pFrame_BGR24->data, pFrame_BGR24->linesize);

    av_packet_unref(&packet);
    av_init_packet(&packet);

    mwSize dims[] = {(pCodecCtx->width) *
                     ((pCodecCtx->height == 720) ? 720 : pCodecCtx->height) *
                     sizeof(uint8_t) * 3};
    plhs[0] = mxCreateNumericArray(1, dims, mxUINT8_CLASS, mxREAL);
    // plhs[0] = mxCreateDoubleMatrix(pCodecCtx->height, pCodecCtx->width, mxREAL);
    point = mxGetPr(plhs[0]);
    memcpy(point, pFrame_BGR24->data[0],
           (pCodecCtx->width) * (pCodecCtx->height) * sizeof(uint8_t) * 3);
}
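
For contrast, a minimal sketch of the usual decode-then-convert pattern (not the asker's fixed program; it reuses pCodecCtxH264, pFrame, and pFrame_BGR24 from the code above): leave pix_fmt on the decoder context alone, since libavcodec sets it during decoding, and key the BGR24 conversion off the format the decoded frame actually reports:

/* Sketch: build the swscale context from the decoder-reported format
   (normally AV_PIX_FMT_YUV420P for an H.264 stream) instead of a format
   forced onto the context before avcodec_open2(). */
if (avcodec_receive_frame(pCodecCtxH264, pFrame) == 0) {
    struct SwsContext *sws = sws_getContext(
        pFrame->width, pFrame->height, (enum AVPixelFormat)pFrame->format,
        pFrame->width, pFrame->height, AV_PIX_FMT_BGR24,
        SWS_BILINEAR | SWS_ACCURATE_RND, NULL, NULL, NULL);
    sws_scale(sws, (const uint8_t *const *)pFrame->data, pFrame->linesize,
              0, pFrame->height, pFrame_BGR24->data, pFrame_BGR24->linesize);
    sws_freeContext(sws);
}

Creating the conversion context per frame is wasteful; it is done here only to show that the source format should come from the frame, not from a value assigned by hand.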