
Media (91)
-
Head down (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
Other articles (15)
-
Definable image and logo sizes
9 February 2011, by
In many places on the site, logos and images are resized to fit the slots defined by the themes. Since all of these sizes can change from one theme to another, they can be defined directly in the theme, so the user does not have to configure them manually after changing the appearance of their site.
These image sizes are also available in the MediaSPIP Core specific configuration. The maximum size of the site logo in pixels allows (...)
-
Farm-mode installation
4 February 2011, by
Farm mode makes it possible to host several MediaSPIP sites while installing their functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
To begin with, you must have installed the same files as the installation (...)
-
Possible deployments
31 January 2010, by
Two types of deployment can be considered, depending on two aspects: the intended installation method (standalone or farm); the expected number of daily encodings and the expected traffic.
Encoding video is a heavy process that consumes a huge amount of system resources (CPU and RAM), and all of this has to be taken into account. This system is therefore only possible on one or more dedicated servers.
Single-server version
The single-server version consists of using only one (...)
On other sites (2836)
-
How to allow a worker to run an ffmpeg command on Heroku for my Python/Django app?
10 March 2013, by GetItDone
I've been stuck trying to figure this out for weeks. I previously asked a similar question found here, but I never got any replies. I really cannot find any good documentation anywhere. All I need to do is use a worker (I don't care which worker; I have django-celery and rq installed) to convert a file to FLV when it is uploaded from a form. I was able to get this done easily locally, but after over a week I haven't been able to get it to work no matter what I have tried. I tried adding a tasks.py file for Celery, or a worker.py file for rq, and I have no idea what else (if anything) needs to be done, such as in my settings.py or Procfile. My Procfile looks like:
web: gunicorn lftv.wsgi -b 0.0.0.0:$PORT
celeryd: celery -A tasks worker --loglevel=info
worker: python worker.py
My requirements.txt, showing what I have installed, looks like this:
Django==1.4.3
Logbook==0.4.1
amqp==1.0.6
anyjson==0.3.3
billiard==2.7.3.19
boto==2.6.0
celery==3.0.13
celery-with-redis==3.0
distribute==0.6.31
dj-database-url==0.2.1
django-celery==3.0.11
django-s3-folder-storage==0.1
django-storages==1.1.6
gunicorn==0.16.1
kombu==2.5.4
pil==1.1.7
psycopg2==2.4.5
python-dateutil==1.5
pytz==2012j
redis==2.7.2
requests==1.1.0
rq==0.3.2
six==1.2.0
times==0.6
The only relevant parts of my settings.py are as follows:
BROKER_BACKEND = 'django'
BROKER_URL = #For this I copy/pasted the code from my redistogo add-on from heroku. Not sure if correct
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 1800}
Without trying to take up too much more space, my tasks.py looks like this:
from celery import task  # import added for completeness; location may differ per Celery setup
import subprocess

@task
def ffmpeg_conversion(input_file):
    converted_file = subprocess.call(input_file)
    return converted_file
I use S3 to store my static and media files, and the upload works (adding uploads to my bucket), however no matter what I try, the conversion never happens. Is there a good tutorial for absolute beginners? I followed the Heroku Redis tutorial, the Celery docs, the rq docs, and whatever else I could find, and got the examples to work, but the worker will not execute the command from my view. For example, one of the many things I tried:
...
ffmpeg = "ffmpeg -i %s -acodec mp3 -ar 22050 -f flv -s 320x240 %s" % (sourcefile, targetfile)
ffmpegresult = ffmpeg_conversion.delay(ffmpeg)
...
or using rq:
...
q = Queue(connection=conn)
result = q.enqueue(ffmpeg_conversion, ffmpeg)
...
It seems like it should be simple; however, I am completely self-taught and have never deployed a project before, and there just doesn't seem to be any good documentation or tutorial available for what I am trying to do. I can't judge whether I am completely off and missing something significant, or relatively close to getting this to work. I really do appreciate any input whatsoever; this is driving me nuts. Thanks in advance.
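For reference, a minimal sketch of the kind of Celery task that could run an ffmpeg command string like the one built in the view above. The names are illustrative rather than taken from the project, the exact import location depends on the Celery version in use, and ffmpeg itself must be available on the worker dyno (on Heroku that typically means adding a buildpack):

import shlex
import subprocess

from celery import task  # assumption: decorator import as in Celery 3.x

@task
def run_ffmpeg(command):
    # subprocess expects a list of arguments, so split the command string first
    args = shlex.split(command)
    # subprocess.call returns ffmpeg's exit code (0 on success)
    return subprocess.call(args)

Called from a view with run_ffmpeg.delay(ffmpeg), this at least makes the subprocess invocation unambiguous; it is a sketch under the stated assumptions, not a drop-in fix for the setup above.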
-
How can I stream a canvas generated in Node.js through ffmpeg to YouTube or any other RTMP server?
10 October 2020, by DDC
I wanted to generate some images in Node.js, compile them into a video and stream them to YouTube. To generate the images I'm using the node-canvas module. This sounds simple enough, but I wanted to generate the images continuously and stream the result in real time. I'm very new to this whole thing, and after reading a bunch of resources on the internet, what I was thinking about doing was:


- Open ffmpeg with spawn('ffmpeg', ...args), setting the output to the destination RTMP server
- Generate the image in the canvas
- Convert the content of the canvas to a buffer, and write it to the ffmpeg process through stdin
- Enjoy the result on YouTube
But it's not as simple as that, is it? I saw people sharing code involving client-side JS running in the browser, but I wanted it to be a Node app so that I could run it from a remote VPS.
Is there a way for me to do this without using something like p5 in my browser and capturing the window to restream it?
Is my thought process even remotely adequate? For now I don't really care about performance/resource usage. Thanks in advance.


EDIT:
I worked on it for a bit, and I couldn't get it to work...
This is my code:


const { spawn } = require('child_process');
const { createCanvas } = require('canvas');
const fs = require('fs');

const canvas = createCanvas(1920, 1080);
const ctx = canvas.getContext('2d');

// Spawn ffmpeg reading PNG images from stdin and pushing an FLV stream to the RTMP ingest URL
const ffmpeg = spawn("ffmpeg",
  ["-re", "-f", "png_pipe", "-vcodec", "png", "-i", "pipe:0", "-vcodec", "h264", "-re", "-f", "flv", "rtmp://a.rtmp.youtube.com/live2/key-i-think"],
  { stdio: 'pipe' });

const randomColor = (depth) => Math.floor(Math.random() * depth);
const random = (min, max) => (Math.random() * (max - min)) + min;

let i = 0;
let drawSomething = function () {
  // Draw a random line on the canvas
  ctx.strokeStyle = `rgb(${randomColor(255)}, ${randomColor(255)}, ${randomColor(255)})`;
  let x1 = random(0, canvas.width);
  let x2 = random(0, canvas.width);
  let y1 = random(0, canvas.height);
  let y2 = random(0, canvas.height);
  ctx.moveTo(x1, y1);
  ctx.lineTo(x2, y2);
  ctx.stroke();

  // Encode the canvas as a PNG buffer and write it to ffmpeg's stdin
  let out = canvas.toBuffer();
  ffmpeg.stdin.write(out);
  i++;
  if (i >= 30) {
    ffmpeg.stdin.end();
    clearInterval(int);
  }
};

drawSomething();
let int = setInterval(drawSomething, 1000);
I'm not getting any errors, but I'm not getting any video data from it either. I have set up an RTMP server that I can connect to and then get the stream with VLC, but I don't receive any video data. Am I doing something wrong? I looked around for a while, and I can't seem to find anyone who has tried this, so I don't really have a clue...


EDIT 2:
Apparently I was on the right track, but my approach just gave me about 2 seconds of "good" video and then it started becoming blocky and messy. I think that, most likely, my method of generating images is just too slow. I'll try to use some GPU-accelerated code to generate the images instead of using the canvas, which means I'll be doing fractals all the time, since I don't know how to do anything else with that. Also, a bigger buffer in ffmpeg might help too.
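For what it's worth, the frame-pacing issue described here is usually handled by declaring an input framerate on the image pipe and writing frames at roughly that rate. A minimal sketch of that pattern, written in Python only to keep it self-contained; the RTMP URL and the frame source are placeholders, not taken from the question above:

import subprocess

RTMP_URL = "rtmp://a.rtmp.youtube.com/live2/STREAM-KEY"  # placeholder stream key

# Read PNG frames from stdin at a declared 30 fps, encode to H.264 and push as FLV.
ffmpeg = subprocess.Popen(
    ["ffmpeg",
     "-f", "image2pipe", "-vcodec", "png", "-framerate", "30",
     "-i", "pipe:0",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "-g", "60",
     "-f", "flv", RTMP_URL],
    stdin=subprocess.PIPE)

def send_frame(png_bytes):
    # png_bytes is one complete PNG-encoded frame (e.g. canvas.toBuffer() on the Node side)
    ffmpeg.stdin.write(png_bytes)

# ... call send_frame() roughly 30 times per second, then:
# ffmpeg.stdin.close(); ffmpeg.wait()

The same idea applies in Node: tell ffmpeg how fast frames arrive and keep the write rate close to it, rather than relying on -re alone.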


-
How would I assign multiple mmaps from a single file descriptor?
9 June 2011, by Alex Stevens
So, for my final year project, I'm using Video4Linux2 to pull YUV420 images from a camera, pass them through to x264 (which uses these images natively), and then send the encoded stream via Live555 to an RTP/RTCP-compliant video player on a client over a wireless network. All of this I'm trying to do in real time, so there'll be a control algorithm, but that's not the scope of this question. All of this, except Live555, is being written in C. Currently, I'm near the end of encoding the video, but want to improve performance.
To say the least, I've hit a snag... I'm trying to avoid user-space pointers for V4L2 and use mmap(). I'm encoding video, but since it's YUV420, I've been malloc'ing new memory to hold the Y', U and V planes in three different variables for x264 to read from. I would like to keep these variables as pointers to an mmap'ed piece of memory.
However, the V4L2 device has one single file descriptor for the buffered stream, and I need to split the stream into three mmap'ed variables adhering to the YUV420 standard, like so...
buffers[n_buffers].y_plane = mmap(NULL, (2 * width * height) / 3,
                                  PROT_READ | PROT_WRITE, MAP_SHARED,
                                  fd, buf.m.offset);
buffers[n_buffers].u_plane = mmap(NULL, width * height / 6,
                                  PROT_READ | PROT_WRITE, MAP_SHARED,
                                  fd, buf.m.offset +
                                      ((2 * width * height) / 3 + 1) /
                                      sysconf(_SC_PAGE_SIZE));
buffers[n_buffers].v_plane = mmap(NULL, width * height / 6,
                                  PROT_READ | PROT_WRITE, MAP_SHARED,
                                  fd, buf.m.offset +
                                      ((2 * width * height) / 3 +
                                       width * height / 6 + 1) /
                                      sysconf(_SC_PAGE_SIZE));
Where "width" and "height" are the resolution of the video (e.g. 640x480).
From what I understand, mmap seeks through a file, kind of like this (pseudo-ish code):
fd = v4l2_open(...);
lseek(fd, buf.m.offset + (2 * width * height) / 3);
read(fd, buffers[n_buffers].u_plane, width * height / 6);
My code is located in a Launchpad repo here (for more background):
http://bazaar.launchpad.net/ alex-stevens/+junk/spyPanda/files (Revision 11)
And the YUV420 format can be seen clearly from this Wiki illustration: http://en.wikipedia.org/wiki/File:Yuv420.svg (I essentially want to split up the Y, U, and V bytes into each mmap'ed memory.)
Anyone care to explain a way to mmap three variables to memory from the one file descriptor, or where I went wrong? Or even hint at a better way to pass the YUV420 buffer to x264? :P
Cheers! ^^
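One note on the approach above: mmap() offsets must be page-aligned, so arbitrary plane offsets inside a frame generally cannot be mapped directly; a common pattern is to map the whole buffer once and take per-plane pointers into it (for YUV420, Y is width*height bytes and U and V are width*height/4 each). A small illustrative sketch of that arithmetic in Python; the device path, offset 0 and resolution are assumptions, and a real V4L2 program would get the buffer offset and length from VIDIOC_QUERYBUF:

import mmap
import os

width, height = 640, 480          # assumed resolution
y_size = width * height           # Y plane: 1 byte per pixel
c_size = (width * height) // 4    # U and V planes: 2x2 chroma subsampling
frame_size = y_size + 2 * c_size  # 1.5 bytes per pixel overall

fd = os.open("/dev/video0", os.O_RDWR)   # assumption: buffer is mappable at offset 0
buf = mmap.mmap(fd, frame_size, mmap.MAP_SHARED,
                mmap.PROT_READ | mmap.PROT_WRITE, offset=0)

# One mapping, three zero-copy views into it: no extra mmap calls, no alignment issues.
view = memoryview(buf)
y_plane = view[:y_size]
u_plane = view[y_size:y_size + c_size]
v_plane = view[y_size + c_size:frame_size]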