
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (45)
-
Keeping control of your media in your hands
13 April 2011, by
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible, and development is based on expanding the (...) -
The plugin: Podcasts.
14 July 2010, by
The problem of podcasting is once again a problem that reveals the state of standardisation of data transport on the Internet.
Two interesting formats exist: the one developed by Apple, strongly geared towards iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free", notably supported by Yahoo and the Miro software.
Types of files supported in the feeds
Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...) -
Images
15 May 2013
On other sites (4714)
-
VLC - Could someone assist me in improving latency when streaming to a web-based app?
19 January 2017, by zyeek

I have been looking for solutions for streaming an IP camera's stream to HTML5. Currently, as it is, HTML5 doesn't support RTSP so easily.

I am trying to be able to view the camera's stream as close to live as possible, and I was hoping someone could help me achieve that. I have been playing with it to get something workable, but at the moment I get a stream with a 5 s delay. It is smooth, but I wish to get it within a <1-2 s delay if possible.

My current setup takes my IP camera's stream in RTSP, converts it to webm and streams it to a URL, which I then plan to use elsewhere in a web app.

What I would like to achieve:

Use a protocol that has low latency, with audio as well. Webm was used as a test, but I can't seem to find the commands that get a proper stream going.

I would like to use DASH, but from what I have read, FFMPEG currently doesn't support it. I was thinking maybe RTMP would be good enough for now, being both low latency and HTML5 compatible. I am just unable to figure out how to get FFMPEG to transcode the RTSP to RTMP.
SETUP:

I am using ffserver and ffmpeg. Overall scope: trying to pull the IP camera stream and put it on a web app.

The framework I am using is Meteor JS, so I am trying to use as few plugins or complex outside setups as possible, because I want to be able to deploy this Meteor app on mobile devices as well. So I want to stay within the boundaries of what HTML5 can support.

My current ffserver setup is:
ffserver.conf (this was taken from a bunch of different places):

HTTPPort 8090 # Port to bind the server to
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 10000 # Maximum bandwidth per client
# set this high enough to exceed stream bitrate
CustomLog -
<Feed feed.ffm>
File /tmp/feed.ffm
FileMaxSize 100K
ACL allow 127.0.0.1
</Feed>
<stream>
Format webm
Feed feed.ffm
NoAudio
VideoCodec libvpx
VideoFrameRate 24
VideoBitRate 1024
VideoSize 480x270
VideoBufferSize 1024
AVOptionVideo flags +global_header
StartSendOnKey
</stream>
<stream> # Server status URL
Format status
# Only allow local people to get the status
ACL allow localhost
</stream>
<redirect> # Just an URL redirect for index
# Redirect index.html to the appropriate site
URL url/
</redirect>

Works normally:
ffserver version 3.2.2 Copyright (c) 2000-2016 the FFmpeg developers
built with Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.2.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-frei0r --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-libopenjpeg --disable-decoder=jpeg2000 --extra-cflags=-I/usr/local/Cellar/openjpeg/2.1.2/include/openjpeg-2.1 --enable-nonfree --enable-vda
libavutil 55. 34.100 / 55. 34.100
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.100 / 57. 56.100
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libavresample 3. 1. 0 / 3. 1. 0
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
/etc/ffserver.conf:27: Setting default value for video bit rate tolerance = 256000. Use NoDefaults to disable it.
/etc/ffserver.conf:27: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
/etc/ffserver.conf:27: Setting default value for video max rate = 2048000. Use NoDefaults to disable it.
Wed Jan 18 17:04:30 2017 FFserver started.

Now I give life to the feed with ffmpeg. The command I use:

ffmpeg -vsync 2 -i rtsp://admin:password@192.168.2.165:88/videoMain -map 0 http://localhost:8090/feed.ffm
which gives the result:
ffmpeg version 3.2.2 Copyright (c) 2000-2016 the FFmpeg developers
built with Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)
configuration: --prefix=/usr/local/Cellar/ffmpeg/3.2.2 --enable-shared --enable-pthreads --enable-gpl --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --enable-ffplay --enable-frei0r --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxvid --enable-opencl --disable-lzma --enable-libopenjpeg --disable-decoder=jpeg2000 --extra-cflags=-I/usr/local/Cellar/openjpeg/2.1.2/include/openjpeg-2.1 --enable-nonfree --enable-vda
libavutil 55. 34.100 / 55. 34.100
libavcodec 57. 64.101 / 57. 64.101
libavformat 57. 56.100 / 57. 56.100
libavdevice 57. 1.100 / 57. 1.100
libavfilter 6. 65.100 / 6. 65.100
libavresample 3. 1. 0 / 3. 1. 0
libswscale 4. 2.100 / 4. 2.100
libswresample 2. 3.100 / 2. 3.100
libpostproc 54. 1.100 / 54. 1.100
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://admin:password@192.168.2.165:88/videoMain':
Metadata:
title : IP Camera Video
comment : videoMain
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1280x720, 90k tbr, 90k tbn, 180k tbc
Stream #0:1: Audio: pcm_mulaw, 8000 Hz, mono, s16, 64 kb/s
[libvpx @ 0x7fd58184a600] v1.6.0
Output #0, ffm, to 'http://localhost:8090/feed.ffm':
Metadata:
title : IP Camera Video
comment : videoMain
creation_time : now
encoder : Lavf57.56.100
Stream #0:0: Video: vp8 (libvpx), yuv420p, 480x270, q=-1--1, 1024 kb/s, 90k fps, 1000k tbn, 24 tbc
Metadata:
encoder : Lavc57.64.101 libvpx
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 8388608 vbv_delay: -1
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> vp8 (libvpx))
Press [q] to stop, [?] for help
[rtsp @ 0x7fd581000000] max delay reached. need to consume packet
[rtsp @ 0x7fd581000000] RTP: missed 5 packets
[h264 @ 0x7fd5818ae800] Increasing reorder buffer to 1
frame= 139 fps= 18 q=0.0 Lsize= 440kB time=00:00:09.25 bitrate= 389.7kbits/s speed=1.19x
video:429kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.663893%
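Since the question is about shaving latency, a hedged variant of that feed command which trims input-side buffering may help (an untested sketch; the flags shown are standard ffmpeg demuxer options, and the encoding itself is still governed by the ffserver stream settings):

# reduce input-side buffering before feeding ffserver
ffmpeg -fflags nobuffer -flags low_delay -rtsp_transport tcp \
    -probesize 32 -analyzeduration 0 \
    -i rtsp://admin:password@192.168.2.165:88/videoMain -map 0 http://localhost:8090/feed.ffm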
-
opus : add a native Opus encoder
11 February 2017, by Rostislav Pehlivanov

opus: add a native Opus encoder

This marks the first time anyone has written an Opus encoder without using any libopus code. The aim of the encoder is to prove how far the format can go by writing the craziest encoder for it.

Right now the encoder is basic: it only supports CBR encoding. However, internally every single feature the CELT layer has is implemented (except the pitch pre-filter, which needs to work well with the rest of whatever gets implemented). Psychoacoustic and rate-control systems are under development.

The encoder takes in frames of 120 samples and, depending on the value of opus_delay, the plan is to use the extra buffered frames as lookahead. Right now the encoder will pick the nearest larger legal frame size and won't use the lookahead, but that will change once there's a psychoacoustic system.

Even though it's a pretty basic encoder, it's already outperforming any other native encoder FFmpeg has by a huge amount.

The PVQ search algorithm is faster and more accurate than libopus's algorithm, so the encoder's performance is close to that of libopus at zero complexity (libopus has more SIMD). The algorithm might be ported to libopus or other codecs using PVQ in the future.

The encoder still has a few minor bugs, like desyncs at ultra-low bitrates (below 9 kbps with 20 ms frames).

Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
-
How to convert a VP8 track with different frame resolutions to h264
13 September 2016, by Nikita

I have a .webm file with a VP8 track, recorded from a WebRTC stream by an external service (TokBox Archiving). The stream is adaptive, so each frame in the track can have a different resolution. Most players (in WebKit browsers) use the video resolution from the track description (which is always 640x480) and scale frames to this resolution. Firefox and VLC use the real frame resolution, changing the video resolution accordingly.

I want to achieve 2 goals:

- play this video in Internet Explorer 9+ without additional plugin installation;
- change the frame resolution to one fixed resolution, so the video will look identical in different browsers.

So, my plan is:

- extract frames from the source webm file to images with the real frame resolution, e.g. PNG or BMP (how could I do that? see the sketch below);
- find the max width and max height of the images;
- add black padding to the images, so smaller frames sit in the center of a new frame (of size MAX_WIDTHxMAX_HEIGHT);
- combine the images into an h264 track using ffmpeg (see the sketch at the end).

Is this all correct? How can I achieve it? Can this algorithm be optimized in some way?
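For step 1, the obvious starting point is ffmpeg's image-sequence output, sketched below with placeholder file names (the frames/ directory must already exist). As noted just below, though, ffmpeg scales every frame to the resolution in the track header rather than keeping each frame's real size:

# dump every frame of the webm to numbered PNGs
ffmpeg -i source.webm frames/frame_%05d.png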
I tried ffmpeg to extract the images, but it does not parse the real frame resolution; it uses the resolution from the track header. I think some libwebm functions can help me (to parse frame headers and extract images). Maybe someone has some code snippets to do this?

Example .webm (download the source, do not play the Google-converted version): https://drive.google.com/file/d/0BwFZRvYNn9CKcndhMzlVa0psX00/view?usp=sharing

Official description of the adaptive stream from TokBox support: https://support.tokbox.com/hc/en-us/community/posts/206241666-Archived-video-resolution-is-supposed-to-be-720x1280-but-reports-as-640x480
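For steps 3 and 4, assuming the frames have been extracted at their real sizes some other way (e.g. via libwebm plus a VP8 decoder), a single ffmpeg invocation can sketch both the padding and the h264 encode; here 1280x720 stands in for MAX_WIDTHxMAX_HEIGHT and 24 fps is a placeholder frame rate:

# letterbox each image onto a fixed canvas, then encode the sequence as h264
ffmpeg -framerate 24 -i frames/frame_%05d.png \
    -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" \
    -c:v libx264 -pix_fmt yuv420p output.mp4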