
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (40)
-
Customizing by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013, by
Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News creation form: for a document of type news, the default fields are: publication date (customize the publication date) (...)
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed Médiaspip is at version 0.2 or higher. If necessary, contact your MédiaSpip administrator to find out.
On other sites (6333)
-
How to encode input images from a camera into an H.264 stream?
22 April 2015, by kuu
I'm trying to encode the input images from a MacBook Pro's built-in FaceTime HD Camera into an H.264 video stream in real time, using libx264 on Mac OS X 10.9.5.
Below are the steps I took:
- Get 1280x720 32BGRA images from the camera at 15 fps using the AVFoundation API (AVCaptureDevice class, etc.).
- Convert the images to 320x180 YUV420P format using libswscale.
- Encode the images into an H.264 video stream (baseline profile) using libx264.
I apply the above steps each time an image is obtained from the camera, on the assumption that the encoder keeps track of the encoding state and produces NAL units as they become available.
Since I wanted to get the encoded frames while still providing input images to the encoder, I decided to flush the encoder every 30 frames (2 seconds), draining it for as long as x264_encoder_delayed_frames() reports pending frames.
However, when I resume encoding after a flush, the encoder stops after a while (x264_encoder_encode() never returns). I tried changing the number of frames between flushes, but the situation didn't change.
Below is the related code (I omitted the image-capture code because it appears to be fine).
Can you point out anything I might be doing wrong?
#include <stdint.h>

#include <x264.h>
#include <libavutil/error.h>
#include <libavutil/imgutils.h>
#include <libavutil/pixfmt.h>
#include <libswscale/swscale.h>

void printNAL(x264_nal_t *nal);  // defined elsewhere (omitted, like the capture code)

x264_t *encoder;
x264_param_t param;

// Will be called only the first time.
int initEncoder() {
    int ret;
    if ((ret = x264_param_default_preset(&param, "medium", NULL)) < 0) {
        return ret;
    }
    param.i_csp = X264_CSP_I420;
    param.i_width = 320;
    param.i_height = 180;
    param.b_vfr_input = 0;
    param.b_repeat_headers = 1;
    param.b_annexb = 1;
    if ((ret = x264_param_apply_profile(&param, "baseline")) < 0) {
        return ret;
    }
    encoder = x264_encoder_open(&param);
    if (!encoder) {
        return AVERROR_UNKNOWN;
    }
    return 0;
}

// Will be called from encodeFrame() defined below.
int convertImage(const enum AVPixelFormat srcFmt, const int srcW, const int srcH,
                 const uint8_t *srcData, const enum AVPixelFormat dstFmt,
                 const int dstW, const int dstH, x264_image_t *dstData) {
    struct SwsContext *sws_ctx;
    int ret;
    int src_linesize[4];
    uint8_t *src_data[4];

    // A new scaler context is created and destroyed on every call;
    // caching it would be cheaper, but this is functionally correct.
    sws_ctx = sws_getContext(srcW, srcH, srcFmt,
                             dstW, dstH, dstFmt,
                             SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws_ctx) {
        return AVERROR_UNKNOWN;
    }
    if ((ret = av_image_fill_linesizes(src_linesize, srcFmt, srcW)) < 0) {
        sws_freeContext(sws_ctx);
        return ret;
    }
    if ((ret = av_image_fill_pointers(src_data, srcFmt, srcH,
                                      (uint8_t *) srcData, src_linesize)) < 0) {
        sws_freeContext(sws_ctx);
        return ret;
    }
    sws_scale(sws_ctx, (const uint8_t * const *) src_data, src_linesize, 0, srcH,
              dstData->plane, dstData->i_stride);
    sws_freeContext(sws_ctx);
    return 0;
}

// Will be called for each frame.
int encodeFrame(const uint8_t *data, const int width, const int height) {
    int ret;
    x264_picture_t pic;
    x264_picture_t pic_out;
    x264_nal_t *nal;
    int i_nal;

    if ((ret = x264_picture_alloc(&pic, param.i_csp, param.i_width, param.i_height)) < 0) {
        return ret;
    }
    if ((ret = convertImage(AV_PIX_FMT_RGB32, width, height, data,
                            AV_PIX_FMT_YUV420P, 320, 180, &pic.img)) < 0) {
        x264_picture_clean(&pic);
        return ret;
    }
    // x264_encoder_encode() returns the payload size in bytes, 0 if the
    // frame is still buffered inside the encoder, or a negative value on error.
    if ((ret = x264_encoder_encode(encoder, &nal, &i_nal, &pic, &pic_out)) < 0) {
        x264_picture_clean(&pic);
        return ret;
    }
    if (ret) {
        for (int i = 0; i < i_nal; i++) {
            printNAL(nal + i);
        }
    }
    x264_picture_clean(&pic);
    return 0;
}

// Will be called every 30 frames.
int flushEncoder() {
    int ret;
    x264_nal_t *nal;
    int i_nal;
    x264_picture_t pic_out;

    /* Drain delayed frames by passing a NULL input picture. */
    while (x264_encoder_delayed_frames(encoder)) {
        if ((ret = x264_encoder_encode(encoder, &nal, &i_nal, NULL, &pic_out)) < 0) {
            return ret;
        }
        if (ret) {
            for (int j = 0; j < i_nal; j++) {
                printNAL(nal + j);
            }
        }
    }
    return 0;
}
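For what it's worth, the x264 API treats a NULL input picture as an end-of-stream signal: once the encoder has been drained this way, it does not appear to support receiving further frames, which matches the stall described above. Below is a minimal sketch of the conventional pattern, draining only once after the last camera frame; finishEncoder() is a hypothetical helper name, not part of the original code, and it reuses the globals and printNAL() from above.

// Hypothetical end-of-stream helper (assumption: called exactly once,
// after the final call to encodeFrame()); reuses the global encoder.
int finishEncoder(void) {
    int ret;
    x264_nal_t *nal;
    int i_nal;
    x264_picture_t pic_out;

    // Drain the frames still buffered inside the encoder.
    while (x264_encoder_delayed_frames(encoder)) {
        if ((ret = x264_encoder_encode(encoder, &nal, &i_nal, NULL, &pic_out)) < 0) {
            return ret;
        }
        for (int i = 0; i < i_nal; i++) {
            printNAL(nal + i);
        }
    }
    x264_encoder_close(encoder);  // no frames may be submitted after this point
    encoder = NULL;
    return 0;
}

If output really is needed every two seconds without ending the stream, an alternative worth trying is tuning for low latency instead of flushing, e.g. x264_param_default_preset(&param, "medium", "zerolatency"), which disables B-frames and lookahead so each input frame yields its NAL units immediately.
-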
Preserving or syncing audio of original video to video fragments
2 May 2015, by Code_Ed_Student
I currently have a few videos that I want to split into EXACTLY 30-second segments. I have been able to accomplish this, but the audio is not properly preserved: it's out of sync. I tried playing with
aresample
ab
and other options, but I am not getting the desired output. What would be the best way to both split the videos into exactly 30-second segments and preserve the audio?
ffmpeg -i $file -preset medium -map 0 -segment_time 30 -g 225 -r 25 -sc_threshold 0 -force_key_frames expr:gte(t,n_forced*30) -f segment -movflags faststart -vf scale=-1:720,format=yuv420p -vcodec libx264 -crf 20 -codec:a copy $dir/$video_file-%03d.mp4
A short snippet of the output:
Input #0, flv, from '/media/sf_linux_sandbox/hashtag_pull/video-downloads/5b64d7ab-a669-4016-b55e-fe4720cbd843/5b64d7ab-a669-4016-b55e-fe4720cbd843.flv':
Metadata:
moovPosition : 40
avcprofile : 77
avclevel : 31
aacaot : 2
videoframerate : 30
audiochannels : 2
©too : Lavf56.15.102
length : 7334912
sampletype : mp4a
timescale : 48000
Duration: 00:02:32.84, start: 0.000000, bitrate: 2690 kb/s
Stream #0:0: Video: h264 (Main), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 30.30 fps, 29.97 tbr, 1k tbn, 59.94 tbc
Stream #0:1: Audio: aac (LC), 48000 Hz, stereo, fltp
[libx264 @ 0x3663ba0] using SAR=1/1
[libx264 @ 0x3663ba0] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64
[libx264 @ 0x3663ba0] profile High, level 3.1
[libx264 @ 0x3663ba0] 264 - core 144 r2 40bb568 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=225 keyint_min=22 scenecut=0 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=20.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, segment, to '/media/sf_linux_sandbox/hashtag_pull/video-edits/30/5b64d7ab-a669-4016-b55e-fe4720cbd843/5b64d7ab-a669-4016-b55e-fe4720cbd843-%03d.mp4':
Metadata:
moovPosition : 40
avcprofile : 77
avclevel : 31
aacaot : 2
videoframerate : 30
audiochannels : 2
©too : Lavf56.15.102
length : 7334912
sampletype : mp4a
timescale : 48000
encoder : Lavf56.16.102
Stream #0:0: Video: h264 (libx264), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 25 fps, 12800 tbn, 25 tbc
Metadata:
encoder : Lavc56.19.100 libx264
Stream #0:1: Audio: aac, 48000 Hz, stereo
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Stream #0:1 -> #0:1 (copy)
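A note on the likely cause: with -codec:a copy the segment muxer can only cut the audio at existing packet boundaries, which rarely coincide with the forced 30-second video keyframes, so each segment starts with a small audio offset that shows up as drift. One possible variant, sketched under the assumption that re-encoding the audio is acceptable, lets both streams be cut at the exact segment boundary (the 128k bitrate is an arbitrary placeholder, and some ffmpeg builds of that era need -strict -2 for the native aac encoder):

ffmpeg -i $file -preset medium -map 0 \
  -vf scale=-1:720,format=yuv420p -vcodec libx264 -crf 20 \
  -g 225 -r 25 -sc_threshold 0 -force_key_frames expr:gte(t,n_forced*30) \
  -codec:a aac -b:a 128k \
  -f segment -segment_time 30 -reset_timestamps 1 \
  $dir/$video_file-%03d.mp4

Here -reset_timestamps 1 makes each segment's timestamps start at zero, so players do not interpret a nonzero start time as an offset between the audio and video streams.
-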
Overlay 2 RTSP live streams in sync
2 May 2015, by perohu
I've got 2 webcams. I capture the two RTSP H.264 streams with FFmpeg, and I overlay one of the webcam streams on the bottom-right corner of the other webcam's picture.
But the streams are not in sync; there is about a 1-second difference. How can I get both webcams in sync?
Here is my FFmpeg command:
ffmpeg -rtsp_transport tcp -i "rtsp://xx.xx.xx.xx:555/h264/ch1/sub/av_stream" -rtsp_transport tcp -i "rtsp://xx.xx.xx.xx:554/h264/ch1/sub/av_stream" -i watermark704x400.png -filter_complex "[0]scale=496:288,crop=w=130:h=268:x=208:y=20 [pip]; [1]scale=704:400 [nagy]; [nagy][2] overlay=0:0 [nagy2];[nagy2][pip] overlay=main_w-overlay_w-5:main_h-overlay_h-5,drawtext=fontfile=/root/fonts/courbd.ttf:textfile=TEMP:fontsize=16:fontcolor=white:x=100-tw:y=7:reload=1,drawtext=fontfile=/root/fonts/courbd.ttf:textfile=HUM:fontsize=16:fontcolor=white:x=100-tw:y=40:reload=1,drawtext=fontfile=/root/fonts/courbd.ttf:text='%{localtime\:%Y.%m.%d. %T}':fontsize=16:fontcolor=white:x=60:y=h-51:shadowcolor=black:shadowx=1:shadowy=1,split=3 [a] [b] [c];[c] fps=fps=1/60 [p]" -c:v libx264 -map "[a]" -crf 23 -maxrate 1200k -bufsize 900k -f tee -movflags faststart -flags +global_header -g 100 -r 20 -rtmp_live live -profile:v main -preset:v medium -level 3.1 "/tmp/mp4/vp.feszek-`date +%Y%m%d-%H%M%S`.mp4|[f=flv]rtmp://xx.xx.xx.xx:12345/livewebcam/vp_feszek" -f image2 -y -map "[b]" -update 1 -r 1 -qscale:v 3 current.jpg -f image2 -map "[p]" -bt 20M -qscale:v 3 /tmp/jpg/`date +%Y%m%d-%H%M%S`-video%06d.jpg
Maybe it is a keyframe-related issue? There are keyframes every second in both source streams.
Thanks!
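If the roughly one-second gap between the cameras is stable, one pragmatic workaround is delaying the input that runs ahead with -itsoffset, which shifts the timestamps of the -i that follows it. A minimal sketch under that assumption; the 1.0 value is only a starting guess to be tuned by eye, and the filter graph and outputs from the command above would stay unchanged:

ffmpeg -rtsp_transport tcp -itsoffset 1.0 -i "rtsp://xx.xx.xx.xx:555/h264/ch1/sub/av_stream" \
       -rtsp_transport tcp -i "rtsp://xx.xx.xx.xx:554/h264/ch1/sub/av_stream" \
       -filter_complex "..." ...

Alternatively, a setpts=PTS+1/TB step on the earlier stream inside the filter_complex shifts it after decoding. Either way the correction is a fixed constant, so if the two cameras drift over time, only synchronizing the cameras' own clocks (e.g. via NTP) will keep the pictures aligned.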