
Other articles (40)
-
Keeping control of your media in your hands
13 April 2011: The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...) -
Customising by adding your logo, banner or background image
5 September 2013: Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
MediaSPIP v0.2
21 June 2013: MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013 and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
On other sites (6515)
-
Error while adding RTSP stream to Ant media server
28 July 2021, by Jacob: I have successfully viewed the stream remotely using ffplay -rtsp_transport tcp rtsp://streamurl.


ffplay version 4.4 Copyright (c) 2003-2021 the FFmpeg developers
 built with Apple clang version 12.0.5 (clang-1205.0.22.9)
 configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/4.4_2 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libdav1d --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-avresample --enable-videotoolbox
 libavutil 56. 70.100 / 56. 70.100
 libavcodec 58.134.100 / 58.134.100
 libavformat 58. 76.100 / 58. 76.100
 libavdevice 58. 13.100 / 58. 13.100
 libavfilter 7.110.100 / 7.110.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 9.100 / 5. 9.100
 libswresample 3. 9.100 / 3. 9.100
 libpostproc 55. 9.100 / 55. 9.100
Input #0, rtsp, from 'rtsp://admin:lyne1234@74.65.209.152:554/11/stream1':
 Metadata:
 title : 10
 Duration: N/A, start: 0.050000, bitrate: N/A
 Stream #0:0: Video: hevc (Main), yuvj420p(pc, bt709), 1920x1080, 20 fps, 20 tbr, 90k tbn, 20 tbc
[swscaler @ 0x160330000] deprecated pixel format used, make sure you did set range correctly
 5.69 M-V: 0.014 fd= 0 aq= 0KB vq= 97KB sq= 0B f=0/0 



However, when playing the stream in a VLC player, or when trying to add the stream to my AWS instance of Ant Media Server, the stream does not go through. On Ant Media Server there are no errors; the stream connects, but it is just a black screen. In the VLC player I get the error:


VLC is unable to open the MRL 'rtsp://{streamurl}'



I currently have an identical camera, set up the exact same way, working fine. The only difference is that they are on different LANs with different ISPs.
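
Since the camera outputs HEVC (as the ffplay output above shows) and the symptom is a black screen rather than a connection error, one thing worth testing is whether the players simply cannot handle HEVC from this source: transcode the RTSP feed to H.264 and push it to Ant Media over RTMP. A rough sketch, untested against this setup (the host is a placeholder, "LiveApp" is assumed to be the default Ant Media application, and the stream ID is your own):

ffmpeg -rtsp_transport tcp -i "rtsp://{streamurl}" \
 -c:v libx264 -preset veryfast -tune zerolatency -pix_fmt yuv420p \
 -an -f flv "rtmp://{ant-media-host}/LiveApp/{streamId}"

The -pix_fmt yuv420p keeps the output in a widely supported pixel format, and -an matches the source, which exposes no audio stream.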


-
ffmpeg creating mpeg-dash chunk files too slowly resulting in 404 errors
17 July 2021, by Danny: I have a hardware encoder feeding FFmpeg to create an MPEG-DASH low-latency stream. It works well for a while, but after letting FFmpeg run for a while and reloading the page there are many 404 errors.


When that happens, the dash.js player tries to fetch the segment file at the "live edge", but that file has not yet been created by FFmpeg. For example, after running for 20-30 minutes and loading the web page player, debug code in the web server shows:

2021-07-16 16:46:30.64 : GET REQUEST : /data/ott/chunk-stream0-00702.m4s
2021-07-16 16:46:30.67 : NOT FOUND. Latest files on filesystem:
 chunk-stream0-00699.m4s.tmp
 chunk-stream0-00698.m4s
 chunk-stream0-00697.m4s
 chunk-stream0-00696.m4s
 ...



So you can see the browser requested chunk 702, but the latest on the server is (part of) 699. With 2-second chunks, that is 3-5 seconds of content not yet available.


To analyze this, I modified FFmpeg's dashenc.c to add a timestamp every time a file is opened, which displays like:

[dash @ 0x9b17c0] 21:48:52.935 1626443332.935 : dashenc_io_open() - opened /data/ott/chunk-stream0-00060.m4s.tmp



I then loaded the timestamps into Excel.


Despite a segment duration of 2.000 seconds, the average time between file opens is 2.011 seconds. Over two hours this accumulated to a 45-second difference between the calculated live edge and the latest file on the server.
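
As a rough sanity check, that drift matches the 11 ms-per-segment overshoot:

 7200 s / 2.000 s per segment ≈ 3600 file opens in two hours
 3600 × (2.011 s − 2.000 s) ≈ 40 s of accumulated drift

which is in the same ballpark as the 45-second difference observed.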


The HW encoder is set to 25 fps and a GOP size of 5. I've confirmed both by analyzing the H.264 NALUs output by the HW encoder.
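
(For reference, the frame-type pattern can be double-checked with ffprobe on a captured elementary stream; capture.h264 below is a placeholder file name:

ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv=p=0 capture.h264

which prints one frame type per line, so a repeating I P P P P pattern confirms the 5-frame GOP.)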


My question: Is this a bug in FFmpeg, or can I avoid this problem by adjusting the settings of the HW encoder and/or the FFmpeg options?


REFERENCE


FFmpeg: Version 4.4 
CentOS 8
Apache 2.4.37



FFmpeg command line (the pipe is fed by a process reading the HW encoder)


ffmpeg -re -loglevel verbose -an -f h264 -i pipe:17 -c:v copy \
-f dash -dash_segment_type mp4 -b:v 1000000 -seg_duration 2.000000 \
-frag_type duration -frag_duration 0.200000 -target_latency 1 \
-window_size 10 -extra_window_size 5 -remove_at_exit 1 -streaming 1 \
-ldash 1 -use_template 1 -use_timeline 0 -write_prft 1 -avioflags direct \
-fflags +nobuffer+flush_packets -format_options movflags=+cmaf \
-utc_timing_url /web/be/time.php /data/ott/master.mpd
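

For context on why late segments turn into 404s: with -use_template 1 and -use_timeline 0, the player addresses segments purely by wall-clock arithmetic, roughly

 segmentNumber ≈ startNumber + floor((now − availabilityStartTime) / seg_duration)

so the live edge advances at exactly one segment per 2.000 s regardless of how fast FFmpeg actually produces files; any production slower than that nominal rate eventually leads to requests for segments that do not exist yet.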



Modified dashenc_io_open() from dashenc.c:

static int
dashenc_io_open(AVFormatContext *s, AVIOContext **pb, char *filename, AVDictionary **options)
{
    DASHContext *c = s->priv_data;
    int http_base_proto = filename ? ff_is_http_proto(filename) : 0;
    int err = AVERROR_MUXER_NOT_FOUND;

    if (!*pb || !http_base_proto || !c->http_persistent) {
        err = s->io_open(s, pb, filename, AVIO_FLAG_WRITE, options);

        // My debug: log the wall-clock time at which each segment file is opened.
        // Needs <time.h> and <sys/timeb.h> (ftime() is deprecated but sufficient here).
        {
            char buf[20], milli[60];
            struct timeb tp;

            ftime(&tp);                                  // seconds + milliseconds
            struct tm *tmInfo = localtime(&tp.time);

            // e.g. "21:48:52.935 1626443332.935"
            strftime(buf, sizeof(buf), "%H:%M:%S", tmInfo);
            snprintf(milli, sizeof(milli), "%s.%03d %ld.%03d ",
                     buf, tp.millitm, (long)tp.time, tp.millitm);

            av_log(s, AV_LOG_INFO, "%s : dashenc_io_open() - opened %s\n", milli, filename);
        }
    }
    return err;
}
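
A small variant (untested sketch) that avoids the deprecated ftime() by using FFmpeg's own av_gettime() from libavutil/time.h for the same log line:

// Untested sketch: replace the ftime()-based block above with av_gettime(),
// which returns microseconds since the epoch (#include "libavutil/time.h").
{
    int64_t   now_us = av_gettime();
    time_t    secs   = now_us / 1000000;
    int       millis = (int)((now_us % 1000000) / 1000);
    struct tm tm_info;
    char      buf[16];

    localtime_r(&secs, &tm_info);   // POSIX; use localtime_s() on Windows
    strftime(buf, sizeof(buf), "%H:%M:%S", &tm_info);
    av_log(s, AV_LOG_INFO, "%s.%03d %ld.%03d : dashenc_io_open() - opened %s\n",
           buf, millis, (long)secs, millis, filename);
}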



-
ffmpeg: nvidia gpu performance sub-optimal
3 August 2021, by david furst: The problem seems fairly basic: I'd like to create thumbnails from incoming video in the shortest time possible, and I'm trying to do this by offloading processing to an NVIDIA GPU.


While ffmpeg runs, I monitor GPU usage with the nvidia-smi utility. GPU usage never goes above 15%, and the time to encode the thumbnails with the GPU is only about 10% less than the time required without it. These performance levels are very disappointing.


My question: am I going about this the wrong way (and if so, how should I go about it), or is this GPU performance 'normal'/'reasonable'?


SYSTEM INFORMATION


The machine is a desktop PC running Windows 10 with 8 GB RAM and an Intel i7-7700. The GPU is an NVIDIA Quadro Pro 4000 with CUDA 11.4 installed. ffmpeg is version N-101372-gb5cb8c8767-g2fc309e699+4 (2021) running under MinGW, built with
--enable-cuda --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-libnpp --enable-nvdec
and --enable-nvenc.

A typical ffmpeg command line I've used is:


1 ffmpeg -hide_banner \
 2 -init_hw_device cuda=cuda:0 -filter_hw_device cuda \
 3 -hwaccel_output_format cuda \
 4 -i "$infile" \
 5 -vf "hwupload_cuda,scale_npp=w=200:h=150:format=yuv420p:interp_algo=lanczos,fps=1/1,hwdownload,format=yuv420p" \
 6 -y "$outdir/%08d.png"



I've varied the above by supplementing some CUDA-related parameters according to posts I've read here on Stack Overflow and in the NVIDIA transcoding guide, but haven't been able to improve performance. Adding any of
-hwaccel cuda, -hwaccel cuvid, -hwaccel nvenc
at the beginning of line 3 results in the error:
Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scaler_0'
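
The filter chain on line 5 still starts with hwupload_cuda in that case, which presumably clashes with the decoder already delivering CUDA frames once -hwaccel_output_format cuda takes effect. A variant I have not been able to verify, which drops hwupload_cuda and lets the decoder feed scale_npp directly, would be:

ffmpeg -hide_banner \
 -hwaccel cuda -hwaccel_output_format cuda \
 -i "$infile" \
 -vf "scale_npp=w=200:h=150:format=yuv420p:interp_algo=lanczos,fps=1/1,hwdownload,format=yuv420p" \
 -y "$outdir/%08d.png"

Note that the PNG encoding itself still runs on the CPU, which may limit how much the GPU can help overall.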


Any pointers appreciated.