
Other articles (25)
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by both HTML5 and Flash).
Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by both HTML5 and Flash).
Where possible, text is analyzed to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
-
Keeping control of your media in your hands
13 April 2011
The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)
On other sites (3771)
-
ffserver leave original stream size
28 November 2014, by ihnatkuk
Hope you guys can help me, because I have got stuck and can't find a solution to this problem by myself.
I am trying to stream video from a webcam to users using ffmpeg + ffserver, but I have run into a problem. ffmpeg gets the stream from the camera and pushes it to the ffserver feed:
ffmpeg -rtsp_transport tcp -i rtsp://admin:admin@192.168.10.76:80 -y -vcodec libvpx http://127.0.0.1:8090/1.ffm
The ffserver stream options are:
<stream>
Feed 1.ffm
Format webm
NoAudio
#VideoCodec libvpx
#VideoSize 480x320
VideoFrameRate 24
AVOptionVideo flags +global_header
AVOptionVideo cpu-used 0
AVOptionVideo qmin 1
AVOptionVideo qmax 31
AVOptionVideo quality good
PreRoll 0
StartSendOnKey
VideoBitRate 128
</stream>
(Note: the VideoSize option is commented out.) But even with the default VideoSize (160x128), ffserver does not respond to every request. The browser always gets:
HTTP/1.0 200 OK
Pragma: no-cache
Content-Type: video/webm
But sometimes the video content is not sent.
If I uncomment the VideoSize option, I get the same problem, but with far fewer successful requests compared with the default video size.
The ffserver log looks normal, with no errors, but as you can see, sometimes it sends only headers to the client:
Thu Nov 27 12:49:11 2014 127.0.0.1 - - [POST] "/1.ffm HTTP/1.1" 200 459
Thu Nov 27 12:49:25 2014 127.0.0.1 - - [POST] "/1.ffm HTTP/1.1" 200 459
Thu Nov 27 12:49:36 2014 127.0.0.1 - - [POST] "/1.ffm HTTP/1.1" 200 459
Thu Nov 27 12:50:52 2014 127.0.0.1 - - [POST] "/1.ffm HTTP/1.1" 200 459
Thu Nov 27 12:53:54 2014 127.0.0.1 - - [POST] "/1.ffm HTTP/1.1" 200 459
Thu Nov 27 13:30:19 2014 127.0.0.1 - - [GET] "/1.ffm HTTP/1.1" 200 4175
Thu Nov 27 13:30:34 2014 127.0.0.1 - - [GET] "/1.webm HTTP/1.1" 200 385731
Thu Nov 27 13:30:34 2014 127.0.0.1 - - [POST] "/1.ffm HTTP/1.1" 200 458752
Thu Nov 27 13:30:36 2014 127.0.0.1 - - [GET] "/1.ffm HTTP/1.1" 200 4175
Thu Nov 27 13:30:58 2014 127.0.0.1 - - [GET] "/1.webm HTTP/1.1" 200 493
Thu Nov 27 13:30:58 2014 127.0.0.1 - - [POST] "/1.ffm HTTP/1.1" 200 622592
Does anybody know what this could be? Actually, I need to keep the original VideoSize for the stream. I am trying to override the ffserver stream options from ffmpeg using the following command (passing the same parameters as in ffserver's stream):
ffmpeg -re -override_ffserver -rtsp_transport tcp -i rtsp://admin:admin@192.168.10.76:80 -an -r 24 -qmin 1 -qmax 31 -cpu-used 0 -quality good -flags:v +global_header -b:v 128 -vcodec libvpx -f webm -y http://127.0.0.1:8090/1.ffm
But at the moment I still get the error message 'Output file is empty, nothing was encoded'. Here is ffmpeg's output:
ffmpeg version 2.4.2 Copyright (c) 2000-2014 the FFmpeg developers
built on Oct 6 2014 17:33:05 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
configuration: --prefix=/opt/ffmpeg --libdir=/opt/ffmpeg/lib/ --enable-shared --enable-avresample --disable-stripping --enable-gpl --enable-version3 --enable-runtime-cpudetect --build-suffix=.ffmpeg --enable-postproc --enable-x11grab --enable-libcdio --enable-vaapi --enable-vdpau --enable-bzlib --enable-gnutls --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libfaac --enable-libvo-aacenc --enable-nonfree --enable-libmp3lame --enable-libx264 --enable-libx265 --enable-libxvid --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfdk_aac --enable-libopus --enable-pthreads --enable-zlib --enable-libvpx --enable-libfreetype --enable-libpulse --enable-debug=3
libavutil 54. 7.100 / 54. 7.100
libavcodec 56. 1.100 / 56. 1.100
libavformat 56. 4.101 / 56. 4.101
libavdevice 56. 0.100 / 56. 0.100
libavfilter 5. 1.100 / 5. 1.100
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 0.100 / 3. 0.100
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 0.100 / 53. 0.100
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://admin:admin@192.168.10.76:80':
Metadata:
title : RTSP Session/2.0
Duration: N/A, start: 0.000000, bitrate: 128 kb/s
Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709), 1280x720 [SAR 1:1 DAR 16:9], 25 fps, 100 tbr, 90k tbn, 50 tbc
Stream #0:1: Audio: pcm_alaw, 16000 Hz, 1 channels, s16, 128 kb/s
[swscaler @ 0x197f7a0] deprecated pixel format used, make sure you did set range correctly
[libvpx @ 0x1a0c080] Bitrate 128 is extremely low, maybe you mean 128k
[libvpx @ 0x1a0c080] v1.3.0
The bitrate parameter is set too low. It takes bits/s as argument, not kbits/s
Output #0, webm, to 'http://127.0.0.1:8090/1.ffm':
Metadata:
title : RTSP Session/2.0
encoder : Lavf56.4.101
Stream #0:0: Video: vp8 (libvpx), yuv420p, 480x320 [SAR 32:27 DAR 16:9], q=1-31, 0 kb/s, 24 fps, 1k tbn, 24 tbc
Metadata:
encoder : Lavc56.1.100 libvpx
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> vp8 (libvpx))
Press [q] to stop, [?] for help
frame= 33 fps= 22 q=0.0 size= 0kB time=00:00:00.00 bitrate=N/A dup=0 droframe= 43 fps= 22 q=0.0 Lsize= 0kB time=00:00:00.00 bitrate=N/A dup=0 drop=1
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)
Received signal 2: terminating.
Thanks in advance.
-
Progress with rtc.io
12 August 2014, by silvia
At the end of July, I gave a presentation about WebRTC and rtc.io at the WDCNZ Web Dev Conference in beautiful Wellington, NZ.
Putting that talk together reminded me of how far we have come in the last year, both with the progress of WebRTC, its standards and browser implementations, and with our own small team at NICTA and our rtc.io WebRTC toolbox.
One of the most exciting opportunities is still under-exploited: the data channel. When I talked about the above slide and pointed out Bananabread, PeerCDN, Copay, PubNub and, later, WebTorrent, that's where I really started to get Web developers excited about WebRTC. They can totally see the paradigm shift to peer-to-peer applications, away from the server-based architecture of the current Web.
Many were also excited to learn more about rtc.io, our own npm-module-based approach to a JavaScript API for WebRTC.
We believe that the world of JavaScript has reached a critical stage where we can no longer code by copy-and-pasting JavaScript snippets from all over the Web. We need a more structured approach to module reuse in JavaScript. Node, with JavaScript on the back end, really only motivated this development; we have needed it for a long time on the front end, too. One big library (jQuery, anyone?) that does everything anyone could ever need on the front end is not going to work any longer with the amount of functionality that we now expect Web applications to support. Just look at the insane growth of npm compared to other module collections:
Packages per day across popular platforms (shamelessly copied from http://blog.nodejitsu.com/npm-innovation-through-modularity/)
For those who, like myself, found it difficult to understand how to tap into the sheer power of npm modules as a front-end developer: simply use browserify. npm modules are prepared following the CommonJS module definition spec. Browserify works natively with that and "compiles" all the dependencies of an npm module into a single bundle.js file that you can use on the front end through a script tag, just as you would in plain HTML. You can learn more about browserify, module definitions, and how to use browserify.
For those of you not quite ready to dive in with browserify, we have prepared the rtc module, which exposes the most commonly used packages of rtc.io through an "RTC" object in a browserified JavaScript file. You can also download the JavaScript file directly from GitHub.
Using the rtc.io rtc JS library
So, I hope you enjoy rtc.io, and I hope you enjoy my slides and the large collection of interesting links inside the deck, and of course: enjoy WebRTC! Thanks to Damon, Jeff, Cathy, Pete and Nathan – you're an awesome team!
On a side note, I was really excited to meet the author of browserify, James Halliday (@substack), at WDCNZ. His talk on "building your own tools" seemed to take me back to the days when everything was done on the command line. I think James is using Node and the Web in a way that would appeal to a Linux kernel developer. Fascinating!
-
Which ffmpeg API corresponds to the cmd option "-q:v 1" when saving JPGs from video? [duplicate]
3 March 2021, by Tangtoo
As the title says, how can I set the quality scale with the ffmpeg API? I want to extract a frame from a video and save it as a .jpg. Maybe I should set some AVDictionary option?


Thanks for reading; I hope you can leave an answer.
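For context, what ffmpeg's command-line tool does for "-q:v 1" is not an AVDictionary lookup: it enables fixed-quantizer mode on the encoder context and sets its global_quality field. The following is only a minimal sketch of that mapping, assuming a hypothetical, already-configured MJPEG encoder context named enc_ctx; it is not code from the question.

#include <libavcodec/avcodec.h>
#include <libavutil/avutil.h>

/* enc_ctx: hypothetical AVCodecContext already set up for AV_CODEC_ID_MJPEG
   (width, height, pix_fmt, time_base configured elsewhere). */
static void set_jpeg_qscale(AVCodecContext *enc_ctx, int qscale)
{
    /* Fixed-quality (constant quantizer) mode, the flag behind -q / -qscale. */
    enc_ctx->flags |= AV_CODEC_FLAG_QSCALE;

    /* The quantizer is expressed in lambda units; qscale 1 is the highest
       JPEG quality, i.e. the equivalent of "-q:v 1". */
    enc_ctx->global_quality = FF_QP2LAMBDA * qscale;
}

Calling set_jpeg_qscale(enc_ctx, 1) before avcodec_open2() should give output comparable to "-q:v 1". The same fields can usually also be reached through the generic "flags" (+qscale) and "global_quality" AVOptions if an AVDictionary-based setup is preferred.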