
Media (91)
-
Spitfire Parade - Crisis
15 May 2011
Updated: September 2011
Language: English
Type: Audio
-
Wired NextMusic
14 May 2011
Updated: February 2012
Language: English
Type: Video
-
Video d’abeille en portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
-
Sintel MP4 Surround 5.1 Full
13 May 2011
Updated: February 2012
Language: English
Type: Video
-
Carte de Schillerkiez
13 May 2011
Updated: September 2011
Language: English
Type: Text
-
Publier une image simplement
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (37)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable release of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, further modifications are also required (...) -
Mise à disposition des fichiers
14 April 2011
By default, when initialized, MediaSPIP does not let visitors download files, whether originals or the result of their transformation or encoding; it only lets them view the files.
However, it is possible and easy to give visitors access to these documents, in various forms.
All of this is handled on the template configuration page. Go to the channel's administration area and choose in the navigation (...) -
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in standalone form.
For a working installation, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, further modifications are also required (...)
On other sites (4277)
-
What is the fastest way to load a local image using JavaScript and/or Node.js, and the fastest way to getImageData?
4 October 2020, by Tom Lecoz
I'm working on an online video-editing tool for a large audience.
Users can create "scenes" with multiple images, videos, text, and sound, add a transition between 2 scenes, add some special effects, etc.


When the users are happy with what they made, they can download the result as an mp4 file with a desired resolution and framerate. Let's say full-HD 60 fps, for example (it can be bigger).


I'm using Node.js & ffmpeg to build the mp4 from an HTMLCanvasElement.
Because it's impossible to seek perfectly frame by frame with an HTMLVideoElement, I start by converting the videos from each "scene" into a sequence of PNGs using ffmpeg.
Then I read my scene frame by frame and, if there are videos, I replace the video elements with an image containing the right frame. Once every image is loaded, I launch the capture and move to the next frame.


Everything works as expected, but it's too slow!
Even with a powerful computer (Ryzen 3900X, RTX 2080 Super, 32 GB of RAM, NVMe 970 Evo Plus), in the best case I can capture a basic full-HD movie (if it contains videos) at 40 FPS.


It may sound good enough, but it's not.
Our company produces thousands of mp4s every day.
A slow encoding process means more servers at work, so it will be more expensive for us.


Until now, my company used (and is still using) a tool based on Adobe Flash, because the whole video-editing tool was made with Flash. I was (and am) in charge of translating the whole thing into HTML. I reproduced every feature one by one over 4 years (it's by far my biggest project), and this is the very last step. Even though the HTML version of our player works very well, its encoding process is much slower than the Flash version, which is able to encode full-HD at 90-100 FPS.


I put console.log everywhere to find what makes the encoding so slow, and there are 2 bottlenecks:


As I said before, for each frame, if there are videos in the current scene, I replace the video elements with images representing the right frame at the right time. Since I'm using local files, I expected an almost synchronous loading time. That's not the case at all; it takes more than 10 ms in most cases.


So my first question is: "what is the fastest way to handle local image loading with JavaScript for this final output?"


I don't care about the technology involved; I have no preference. I just want to be able to load my local images faster than I can now.


The second bottleneck is weird, and to be honest I don't understand what's happening here.


When the current frame is ready to be captured, I need to get its data using CanvasRenderingContext2D.getImageData in order to send it to ffmpeg, and this particular step is very slow.


This single line


let imageData = canvas.getContext("2d").getImageData(0,0,1920,1080); 



takes something like 12-13 ms.
It's very slow!


So I'm also searching for another way to extract the pixel data from my canvas.


A few days ago, I found an alternative to getImageData using the new VideoFrame class, which was created to be used with the VideoEncoder & VideoDecoder classes coming in Chrome 86.
You can do something like this:


let buffers: Uint8Array[] = [];
createImageBitmap(canvas).then((bmp) => {
  let videoFrame = new VideoFrame(bmp);
  // copy each of the three I420 planes (Y, U, V) into its own buffer
  for (let i = 0; i < 3; i++) {
    buffers[i] = new Uint8Array(videoFrame.planes[i].length);
    videoFrame.planes[i].readInto(buffers[i]);
  }
});



It allows me to grab the pixel data about 25% more quickly than getImageData, but as you can see, I don't get a single RGBA buffer; I get 3 weird buffers matching the I420 format.


Ideally, I would like to send them directly to ffmpeg, but I don't know how to deal with these 3 buffers (I have no experience with the I420 format).


I'm not at all sure the solution involving VideoFrame is a good one. If you know a faster way to transfer the data from a canvas to ffmpeg, please tell me.
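For anyone stuck at the same point: those three buffers can be handed to ffmpeg as-is, because I420 (a.k.a. yuv420p) is simply the full-resolution Y plane followed by the quarter-resolution U and V planes, concatenated. The packed frames can then be piped to something like `ffmpeg -f rawvideo -pix_fmt yuv420p -s 1920x1080 -i - out.mp4` on stdin. A sketch (this assumes the planes have no row padding; if VideoFrame reports a stride wider than the row, that padding has to be stripped first):

```javascript
// Concatenate the three I420 planes (Y, U, V) into the single buffer
// layout that `ffmpeg -f rawvideo -pix_fmt yuv420p` expects on stdin.
function packI420(y, u, v) {
  const frame = new Uint8Array(y.length + u.length + v.length);
  frame.set(y, 0);                   // full-resolution luma plane
  frame.set(u, y.length);            // quarter-resolution chroma U
  frame.set(v, y.length + u.length); // quarter-resolution chroma V
  return frame;
}
```

For a 1920x1080 frame the expected sizes are Y = 1920*1080 bytes and U = V = 960*540 bytes, so each packed frame is 1920*1080*3/2 bytes; that is a useful sanity check before piping anything.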


Thanks for reading this very long post.
Any help would be much appreciated.


-
Trying to emulate a hardware camera with a stream from a Raspberry Pi / Pi Cam?
19 April 2021, by John Doe
I have a Raspberry Pi and Pi Cam v2 connected to wifi. I am trying to stream the camera feed to a laptop over wifi, then make that stream appear as if it were a hardware camera on the laptop. I want to do this in order to get it into some other software that requires a hardware camera (I can't rewrite this software, I just have to work around it).


I am running Ubuntu 18.04 on the laptop. From my research, this should be very possible using some combination of ffmpeg and v4l2loopback. I started by installing this library to stream the camera to a webserver: https://github.com/silvanmelchior/RPi_Cam_Web_Interface This works, and I am able to access the camera stream in my browser at http://10.0.0.47/http I set it up with no username/password for simplicity. I believe the camera port is 80, based on nmap output.


I am now trying to redirect the stream on my Ubuntu laptop. Based on my research and experimentation, this command is the closest I've come:


ffmpeg -re -i http://10.0.0.47/html -map 0:v -f v4l2 /dev/video0


(or the same with port 80 instead of /html; not sure if this matters):
ffmpeg -re -i http://10.0.0.47:80 -map 0:v -f v4l2 /dev/video0


This seems to work at first, but then produces the following error:


http://10.0.0.47:80: Invalid data found when processing input


From googling, this may be due to the wrong kind of stream coming out of the RPi_Cam_Web_Interface, but I am not sure if this is true, and if so, how to fix it. I also investigated the HTML code of the page that accesses the webserver, and I can see that it is sending a series of JPGs that change constantly, with timestamps in the file names. So maybe this is the issue, but again, unsure.
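If the page is indeed serving a series of JPEGs (i.e. an MJPEG stream), one thing worth trying is telling ffmpeg the input format explicitly instead of letting it probe, and converting to a pixel format v4l2loopback accepts. This is a guess based on the symptoms, reusing the URL from above:

```shell
# Force ffmpeg to treat the input as MJPEG and convert to yuv420p,
# which the v4l2loopback device is most likely to accept.
ffmpeg -f mjpeg -re -i http://10.0.0.47/html -f v4l2 -pix_fmt yuv420p /dev/video0
```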


Any ideas? Help would be much appreciated.


Edit: I tried another method, which seems to get a little further, but I am still running into issues:


(on the pi): raspivid -o - -t 0 -n -w 320 -h 240 -fps 30 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8000/}' :demux=h264


-> this starts a stream that I am able to successfully view in VLC media player


(then on the host system): gst-launch-1.0 -v rtspsrc location=rtsp://10.0.0.47:8000/ ! v4l2sink device=/dev/video4


I tried various devices, like /dev/video0, /dev/video1, etc. They all produce '"/dev/videoX" is not an output device', except for video4, which seems to work at first but then errors out with:


ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc1: Internal data stream error.
Additional debug info:
gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc1:
streaming stopped, reason not-linked (-1)
Execution ended after 0:00:00.082360368


Any idea what might be going wrong there?
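A possible cause of the not-linked error: rtspsrc outputs RTP-packetized H.264, which cannot connect directly to v4l2sink; the stream usually needs to be depayloaded, decoded, and converted to raw video first. A pipeline sketch using standard GStreamer elements (untested against this particular setup):

```shell
# Depayload the RTP stream, decode the H.264, and convert to a raw
# format before handing it to the v4l2loopback sink.
gst-launch-1.0 -v rtspsrc location=rtsp://10.0.0.47:8000/ \
  ! rtph264depay ! h264parse ! avdec_h264 \
  ! videoconvert ! v4l2sink device=/dev/video4
```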


Edit 2:


I may have it working with the following sequence of commands:


(on pi): raspivid -o - -t 0 -n -w 320 -h 240 -fps 30 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8000/}' :demux=h264


(on host computer): ffmpeg -f h264 -i tcp://10.0.0.47:8000/ -f v4l2 -pix_fmt yuv420p /dev/video0


This doesn't throw any errors, but I'm not 100% sure it's working, because I haven't been able to load the stream into the software I'm trying to get the camera feed into yet. I tried testing it with this website tool:




and the tool allows me to select "Dummy Video Device 0x0000", which I'm pretty sure is it, but it then tells me "no video device detected", fails to find any camera in its testing, and then the same Dummy Video Device doesn't show up as an option on subsequent page reloads. So I think there's something wrong with how I'm passing the stream.
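One known gotcha with v4l2loopback and browser-based tools: unless the module is loaded with exclusive_caps=1, the device advertises both capture and output capabilities, and Chrome-like browsers then refuse to treat it as a real camera. A hedged suggestion (the exclusive_caps option is documented by v4l2loopback; reloading the module drops any stream currently feeding it):

```shell
# Reload v4l2loopback so the dummy device advertises capture-only
# capability, which browser camera detection typically requires.
sudo modprobe -r v4l2loopback
sudo modprobe v4l2loopback exclusive_caps=1 video_nr=0 card_label="VirtualCam"
```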


-
avformat/rtsp : Send mode=record instead of mode=receive in Transport header
15 January 2024, by Paul Orlyk
avformat/rtsp: Send mode=record instead of mode=receive in Transport header
Fixes server compatibility issues with rtspclientsink GStreamer plugin.
From the specification:
RFC 7826 "Real-Time Streaming Protocol Version 2.0" (https://datatracker.ietf.org/doc/html/rfc7826), section 18.54:
mode: The mode parameter indicates the methods to be supported for this session. The currently defined valid value is "PLAY". If not provided, the default is "PLAY". The "RECORD" value was defined in RFC 2326; in this specification, it is unspecified but reserved. RECORD and other values may be specified in the future.
RFC 2326 "Real Time Streaming Protocol (RTSP)" (https://datatracker.ietf.org/doc/html/rfc2326), section 12.39:
mode: The mode parameter indicates the methods to be supported for this session. Valid values are PLAY and RECORD. If not provided, the default is PLAY.
mode=receive was always like this, from the initial commit 'a8ad6ffa rtsp: Add listen mode'.
For comparison, Wowza was used as a server to push the RTSP stream to. Both GStreamer and FFmpeg had no issues.
Here is the capture of Wowza responding to the SETUP request:
200 OK
CSeq: 3
Server: Wowza Streaming Engine 4.8.26+4 build20231212155517
Cache-Control: no-cache
Expires: Mon, 15 Jan 2024 19:40:31 GMT
Transport: RTP/AVP/UDP;unicast;client_port=11640-11641;mode=record;source=172.17.0.2;server_port=6976-6977
Date: Mon, 15 Jan 2024 19:40:31 GMT
Session: 1401457689;timeout=60
Test setup:
Server: ffmpeg -loglevel trace -y -rtsp_flags listen -i rtsp://0.0.0.0:30800/live.stream t.mp4
FFmpeg client: ffmpeg -re -i "Big Buck Bunny - FULL HD 30FPS.mp4" -c:v libx264 -f rtsp rtsp://127.0.0.1:30800/live.stream
GStreamer client: gst-launch-1.0 videotestsrc is-live=true pattern=smpte ! queue ! videorate ! videoscale ! video/x-raw,width=640,height=360,framerate=60/1 ! timeoverlay font-desc="Sans, 84" halignment=center valignment=center ! queue ! videoconvert ! tee name=t t. ! x264enc bitrate=9000 pass=cbr speed-preset=ultrafast byte-stream=false key-int-max=15 threads=1 ! video/x-h264,profile=baseline ! queue ! rsink. audiotestsrc ! voaacenc ! queue ! rsink. t. ! queue ! autovideosink rtspclientsink name=rsink location=rtsp://localhost:30800/live.stream
Test results:
modified FFmpeg client -> stock server: ok
stock FFmpeg client -> modified server: ok
modified FFmpeg client -> modified server: ok
GStreamer client -> modified server: ok
Signed-off-by: Paul Orlyk <paul.orlyk@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>