
Other articles (49)
-
Customising by adding your logo, banner or background image
5 September 2013. Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013 and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)
-
Making files available
14 April 2011. By default, when first set up, MediaSPIP does not allow visitors to download files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible and easy to give visitors access to these documents in various forms.
All of this is done on the template configuration page: go to the channel's administration area and choose in the navigation (...)
On other sites (7664)
-
How do I calculate optimal dimensions and bitrate for displaying a video on an iPhone?
6 February 2020, by wachutu. I'm currently developing a mobile app that will have a library of 2-5 minute videos (approx. 100 in total) and am working out which versions of the videos to have ready to serve to different mobile devices. In my research, I noticed that there is a lot of room to play with video settings such as dimensions and bitrate.
As a first test, I am attempting to find the smallest video I can deliver to an iPhone XS (1125x2436) without losing any noticeable quality. I started by scaling the video to 1125x2436 and creating versions at five different bitrates ranging from 500 kbps to 4400 kbps. At 1500 kbps the video looks great and the file size is cut to roughly a third, so that was a good start.
Then, after some reading, I saw that in adaptive-bitrate scenarios Apple recommends delivering video of lower bitrate AND lower resolution. So in my next test I cut both in half: scaled to 562x1218 with a bitrate of 750 kbps, and the video also looked great on the iPhone. So 1125x2436 at 750 kbps looks bad, but 562x1218 at 750 kbps looks great on the same device. To some extent this makes sense, as you need fewer bits to fill a smaller screen, but what I don't understand is how the scaling plays a factor. Shouldn't it essentially pixelate, since the resolution is half the iPhone's dimensions? And at a higher level, is there a somewhat concrete way to figure out this optimal resolution/bitrate balance given the dimensions of a device? We want to support most modern smartphones (iPhone 6 and later, Samsung Galaxy, etc.), so we need to be prepared for a range of dimensions (aspect ratios 9:16 or 6:13).
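One rough way to reason about the resolution/bitrate balance the question asks about is a bits-per-pixel (BPP) heuristic. The sketch below assumes a figure of 0.1 BPP for H.264, which is a common rule of thumb rather than an Apple recommendation, and `targetBitrateKbps` is a hypothetical helper, not part of any library:

```javascript
// Sketch: estimate a target bitrate from resolution and frame rate using a
// bits-per-pixel heuristic. 0.1 BPP is a rule-of-thumb value for H.264;
// tune it per codec and content.
function targetBitrateKbps(width, height, fps, bpp = 0.1) {
  // bits per second = pixels per frame * frames per second * bits per pixel
  const bps = width * height * fps * bpp;
  return Math.round(bps / 1000); // kbps
}

// Full iPhone XS resolution at 30 fps:
console.log(targetBitrateKbps(1125, 2436, 30)); // 8222
// Half the resolution means a quarter of the pixels, so a quarter of the bits:
console.log(targetBitrateKbps(562, 1218, 30)); // 2054
```

This also illustrates why halving both dimensions lets the bitrate drop so far: pixel count, and therefore the bit budget, scales with the square of the linear resolution.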
-
Pipe live stream from NodeJS Readable to ffmpeg
22 September 2022, by Abbas Yassini. I am using Twilio to transfer audio calls into a meeting room built on top of Mediasoup.
Through the Twilio websocket I receive the media stream encoded as base64 and push it into a NodeJS Readable stream, which I then pipe into an ffmpeg process created earlier with NodeJS spawn().


The problem is that only 92 bytes get piped into ffmpeg, so the output is not playable.
I don't know whether the ffmpeg options are wrong or whether I'm using the Readable stream the wrong way.


const process = spawn('ffmpeg', [
  '-loglevel',
  'debug',
  '-re',
  '-protocol_whitelist',
  'pipe,udp,rtp',
  '-f',
  'mulaw',
  '-i',
  'pipe:0',
  '-map',
  '0:a',
  '-c:a',
  'pcm_mulaw',
  'output.wav'
]);

const rstream = new Readable({ encoding: 'binary' })
rstream._read = () => {};

rstream.resume();
rstream.pipe(process.stdin)



Each time I receive media from the Twilio socket as base64, I push that chunk into rstream using:

const mediaBytes = Buffer.from(base64Chunk, "base64")
const count = mediaBytes.byteLength

rstream.push(Buffer.from([0x52, 0x49, ... /* PCM MU_LAW HEADER 54 Bytes */]))
rstream.push(Buffer.from([
  count % 256,
  (count >> 8) % 256,
  (count >> 16) % 256,
  (count >> 24) % 256,
]))
rstream.push(mediaBytes)
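One thing worth noting about the code above: with `-f mulaw -i pipe:0`, ffmpeg expects raw, headerless mu-law bytes on stdin, so pushing a WAV header plus a length word before every chunk likely corrupts the stream; a WAV container needs exactly one header, at the very front, which is also why a header per chunk cannot work. As an illustration only, a mu-law WAV header can be sketched as below, assuming Twilio's 8 kHz mono format; `muLawWavHeader` is a hypothetical helper and some writers omit the `fact` chunk:

```javascript
// Sketch: build a single 58-byte WAV header for mu-law audio (format code 7).
// dataLength is the total payload size in bytes, which must be known (or
// patched in afterwards) up front.
function muLawWavHeader(dataLength, sampleRate = 8000) {
  const header = Buffer.alloc(58);
  header.write('RIFF', 0);
  header.writeUInt32LE(50 + dataLength, 4); // file size minus 8
  header.write('WAVE', 8);
  header.write('fmt ', 12);
  header.writeUInt32LE(18, 16);             // fmt chunk size (non-PCM: 18)
  header.writeUInt16LE(7, 20);              // WAVE_FORMAT_MULAW
  header.writeUInt16LE(1, 22);              // mono
  header.writeUInt32LE(sampleRate, 24);     // sample rate
  header.writeUInt32LE(sampleRate, 28);     // byte rate (8-bit samples)
  header.writeUInt16LE(1, 32);              // block align
  header.writeUInt16LE(8, 34);              // bits per sample
  header.writeUInt16LE(0, 36);              // cbSize (no extra format bytes)
  header.write('fact', 38);
  header.writeUInt32LE(4, 42);              // fact chunk size
  header.writeUInt32LE(dataLength, 46);     // number of samples
  header.write('data', 50);
  header.writeUInt32LE(dataLength, 54);     // data chunk size
  return header;
}
```

For the pipe-to-ffmpeg case, though, the simpler fix suggested by the `-f mulaw` input format is to push only `mediaBytes` and let ffmpeg write the container.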



-
Low latency video shared in local gigabit network using linux [on hold]
6 May 2017, by user3387542. For a robotics task we need to share live video from a webcam with about 6 or 7 users in the same room. OpenCV will be used on the clients to read the situation and send new tasks to the robots. Latency should not be much more than one second; the lower the better. What commands would you recommend for this?
We have one camera on a Linux host that needs to share the video with about 6 other units just a few metres away.
I have already experimented with different setups. While raw video looks perfectly latency-free (over local loopback; the issue is the amount of data), any compression suddenly adds about a second of delay.
And how should we share this over the network? Is broadcasting the right approach? How can it be so hard when they are right next to each other? It works locally but has issues over the network.
#server
ffmpeg -f video4linux2 -r 10 -s 1280x720 -i /dev/video0 -c:v libx264 -preset veryfast -tune zerolatency -pix_fmt yuv420p -f mpegts - | socat - udp-sendto:192.168.0.255:12345,broadcast
#client
socat -u udp-recv:12345,reuseaddr - | vlc --live-caching=0 --network-caching=0 --file-caching=0 -

#raw video - perfectly fine like this locally, video with many artefacts if sent over the network
ffmpeg -f video4linux2 -r 10 -s 1280x720 -i /dev/video0 -c:v rawvideo -f rawvideo -pix_fmt yuv420p - | vlc --demux rawvideo --rawvid-fps 10 --rawvid-width 1280 --rawvid-height 720 --rawvid-chroma I420 -
The technology used doesn't matter, and we don't care about network load either. We just want to use OpenCV on different clients with live data.
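One plausible cause of the on-network artefacts in the setup above is datagram sizing: MPEG-TS packets are a fixed 188 bytes, and streamers conventionally put 7 of them per UDP datagram (1316 bytes) to stay under a typical 1500-byte Ethernet MTU, an alignment that is lost when the stream is chopped arbitrarily through a socat pipe. A quick sketch of the arithmetic (`tsPayloadSize` is a hypothetical helper):

```javascript
// Sketch: how many whole 188-byte MPEG-TS packets fit in one UDP datagram
// without exceeding the MTU, assuming 28 bytes of IPv4 + UDP overhead.
const TS_PACKET = 188;
function tsPayloadSize(mtu = 1500, ipUdpOverhead = 28) {
  const perDatagram = Math.floor((mtu - ipUdpOverhead) / TS_PACKET);
  return perDatagram * TS_PACKET; // payload bytes per datagram
}
console.log(tsPayloadSize()); // 1316 (7 TS packets)
```

Rather than piping through socat, ffmpeg's udp protocol can send the datagrams itself with the right size, e.g. an output of `udp://192.168.0.255:12345?broadcast=1&pkt_size=1316` in place of `-f mpegts -`.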