
Media (1)
-
MediaSPIP Simple: the future default graphical theme?
26 September 2013, by
Updated: October 2013
Language: French
Type: Video
Other articles (36)
-
Custom menus
14 November 2010, by
MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
This lets channel administrators configure these menus in detail.
Menus created at site initialization
By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with templates based on Zpip; (...)
-
Emballe médias: what is it for?
4 February 2011, by
This plugin is designed to manage sites for publishing documents of all types.
It creates "media" items: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image, or text; only a single document can be linked to a given "media" article;
-
Writing a news item
21 June 2013, by
Present changes in your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form. For a document of type news item, the default fields are: publication date (customize the publication date) (...)
On other sites (5108)
-
Update messages_sv.js. Closes #683
20 March 2013, by splitfeed
Fixed invalid HTML entities that made the dateISO error look funny.
-
How to allow a worker to run an ffmpeg command on Heroku for my Python/Django app?
10 March 2013, by GetItDone
I've been stuck trying to figure this out for weeks. I previously asked a similar question (found here) but never got any replies. I really cannot find any good documentation anywhere. All I need to do is use a worker (I don't care which; I have both django-celery and rq installed) to convert a file to FLV when it is uploaded from a form. I was able to get this done easily locally, but after over a week I haven't been able to get it to work no matter what I have tried. I tried adding a tasks.py file for Celery, or a worker.py file for RQ, and I have no idea what else (if anything) needs to be done, such as in my settings.py or Procfile. My Procfile looks like:
web: gunicorn lftv.wsgi -b 0.0.0.0:$PORT
celeryd: celery -A tasks worker --loglevel=info
worker: python worker.py
My requirements.txt, showing what I have installed, looks like this:
Django==1.4.3
Logbook==0.4.1
amqp==1.0.6
anyjson==0.3.3
billiard==2.7.3.19
boto==2.6.0
celery==3.0.13
celery-with-redis==3.0
distribute==0.6.31
dj-database-url==0.2.1
django-celery==3.0.11
django-s3-folder-storage==0.1
django-storages==1.1.6
gunicorn==0.16.1
kombu==2.5.4
pil==1.1.7
psycopg2==2.4.5
python-dateutil==1.5
pytz==2012j
redis==2.7.2
requests==1.1.0
rq==0.3.2
six==1.2.0
times==0.6
The only relevant parts of my settings.py are as follows:
BROKER_BACKEND = 'django'
BROKER_URL = # For this I copy/pasted the value from my Redis To Go add-on on Heroku. Not sure if correct.
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 1800}
Without trying to take up too much more space, my tasks.py looks like this:
import shlex
import subprocess
from celery import task

@task
def ffmpeg_conversion(input_file):
    # split the full ffmpeg command string into argv for subprocess
    converted_file = subprocess.call(shlex.split(input_file))
    return converted_file
I use S3 to store my static and media files, and the upload works (uploads show up in my bucket), but no matter what I try, the conversion never does. Is there a good tutorial for absolute beginners? I followed the Heroku Redis tutorial, the Celery docs, the RQ docs, and whatever else I could find, and got the examples to work, but the worker will not execute the command from my view. For example, one of the many things I tried:
...
ffmpeg = "ffmpeg -i %s -acodec mp3 -ar 22050 -f flv -s 320x240 %s" % (sourcefile, targetfile)
ffmpegresult = ffmpeg_conversion.delay(ffmpeg)
...
or, using RQ:
...
q = Queue(connection=conn)
result = q.enqueue(ffmpeg_conversion, ffmpeg)
...
It seems like it should be simple, but I am completely self-taught and have never deployed a project before, and there just doesn't seem to be any good documentation or tutorial for what I am trying to do. I can't judge whether I am way off and missing something significant, or relatively close to getting this to work. I really do appreciate any input whatsoever; this is driving me nuts. Thanks in advance.
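For reference, a minimal sketch of one way these pieces can fit together, assuming Celery with the Redis To Go add-on (which sets the REDISTOGO_URL environment variable on Heroku); the fallback URL, file paths, and module layout are illustrative, not taken from the question:
# settings.py: read the broker URL from the environment
# (Redis To Go sets REDISTOGO_URL on Heroku)
import os
BROKER_URL = os.environ.get('REDISTOGO_URL', 'redis://localhost:6379/0')

# tasks.py: build the command as an argument list so no shell is involved
import subprocess
from celery import task

@task
def ffmpeg_conversion(source, target):
    # -y overwrites the target file if it already exists
    return subprocess.call(['ffmpeg', '-y', '-i', source,
                            '-acodec', 'mp3', '-ar', '22050',
                            '-f', 'flv', '-s', '320x240', target])

# in the view: enqueue the conversion and return immediately
# ffmpeg_conversion.delay(sourcefile, targetfile)
Two caveats: the worker dyno needs an ffmpeg binary on its path (on Heroku that usually means a buildpack), and since the uploads live on S3, the task has to fetch the source file to the dyno's local filesystem before ffmpeg can read it.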
-
Writing a live multimedia application using OpenGL & Co., saving output to disc [closed]
21 January 2013, by user1997286
I want to write an application that does the following:
- Getting commands via ArtNet (DMX over Ethernet, a control protocol) for each object (called a Layer)
- each Layer can be one of the following: live camera stream, movie, image
- each Layer can be translated, rotated, or stretched
- on each Layer I can set filters (like a kaleidoscope effect, blur, color correction, etc.)
- the resulting video stream lives in 3D space
- I want to display each part of the image on one projector (up to 3 in total) using a TripleHead2Go (3 projectors each display a different region of my DVI output). Each projector image should have its own soft-edge and keystone parameters.
- the resulting image will also be shown on a preview screen with some information overlaid.
I think all of that should be possible with OpenGL and OpenAL (for the movie audio).
I think I'll use C++, OpenGL for graphics, OpenAL for audio, ffmpeg for video conversion if needed, and Ubuntu/Debian as the OS.
The software will be used to do multimedia shows at concerts, including cameras & co.
All of that should happen live (on a FullHD output), with an i7 3770, GTX 670, and 16 GB of RAM, for at least 8 Layers (4 live images at once plus some overlays like the actor's name and some logos).
But now comes the question.
Is it also possible to do the following with that setup:
- writing the output image, with all the 3D translations, to a movie file (to master a DVD later), with audio
- mixing audio from different inputs & files (ambience mics, the signal from the sound mixer, playbacks from my own application) into more than one mix (e.g. one mix for the recording, one mix for live)
- streaming that output, complete or in parts (e.g. the left part of the image), over the network (for example, projector 1 is near the server, so I connect it using DVI; projectors 2+3 are connected to a computer that receives the streams for those two projectors, with soft edge on each stream; and screen 4 is outside the concert hall and shows the complete live stream)
- What GUI framework should I use for that?
- Is it perhaps even performant enough to use Java for that?
- Is it possible to use that mechanism just for rendering (e.g. I have stored the cut points on disc and saved every single camera stream, to fix some errors later or cut out some parts)?
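On the first point (writing the composited output to a movie file), one common approach is to read the rendered frames back from the GPU (e.g. with glReadPixels) and pipe them as raw video into an ffmpeg process. A minimal sketch in Python follows; the idea carries over directly to C++ with popen, and the frame size, frame rate, and output name are assumptions:
import subprocess
import numpy as np

W, H, FPS = 1920, 1080, 25  # assumed FullHD output

# ffmpeg reads raw BGRA frames from stdin and encodes them to H.264
ffmpeg = subprocess.Popen(
    ['ffmpeg', '-y',
     '-f', 'rawvideo', '-pix_fmt', 'bgra',
     '-s', '%dx%d' % (W, H), '-r', str(FPS),
     '-i', '-',  # raw frames arrive on stdin
     '-c:v', 'libx264', '-pix_fmt', 'yuv420p',
     'out.mp4'],
    stdin=subprocess.PIPE)

for _ in range(FPS * 10):  # e.g. 10 seconds of video
    # in the real application this buffer would come from glReadPixels
    frame = np.zeros((H, W, 4), dtype=np.uint8)
    ffmpeg.stdin.write(frame.tobytes())

ffmpeg.stdin.close()
ffmpeg.wait()
Replacing out.mp4 with an rtmp:// or udp:// target gives the network-streaming variant, and audio can be muxed in by giving ffmpeg a second input. At FullHD a synchronous glReadPixels every frame is expensive, so asynchronous readback via pixel buffer objects is the usual workaround.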