
Other articles (80)

  • MediaSPIP Core: Configuration

    9 November 2010, by

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the templates (squelettes); a page for configuring the site's home page; and a page for configuring the sectors.
    It also provides an additional page, which only appears when certain plugins are activated, for controlling their display and specific features (...)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying, and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

  • Specific configuration for PHP5

    4 February 2011, by

    PHP5 is required; you can install it by following this dedicated tutorial.
    It is recommended to disable safe_mode at first; however, if it is correctly configured and the necessary binaries are accessible, MediaSPIP should work correctly with safe_mode enabled.
    Specific modules
    Certain specific PHP modules must be installed, either through your distribution's package manager or manually: php5-mysql for connectivity with the (...)

On other sites (4642)

  • Converting a "gif" to video using Swift

    3 December 2019, by James Woodrow

    I've looked around and found a few things here and there, mainly that I should be using AVAssetWriter to do this. But I have zero experience with it and with video editing/creation in general, so that doesn't help me much: I can't seem to find anything that does something I can modify easily (at my level of knowledge, at least) so that it works as I intend it to.

    I have an app which takes n photos every cft seconds (capture frame time, which I get from a backend server; it's a double, for obvious reasons). I then display these frames using a UIImageView, and the frames change every dft seconds (display frame time, which I also get from a backend server and which can differ from cft). Up to this point, nothing complicated.

    The current workflow is that these frames are sent back to a server with any relevant information I want, and the server then uses ImageMagick to create a real gif file and ffmpeg to create a 15-second video from that gif.

    The issue is that this keeps my Heroku server bills higher than I would like, because of the limited memory on the dynos, and generating these videos takes about 5-10 seconds, I believe (I'm not sure, but it's longer than I'd like).

    So my idea was to have the app create the video, since it already has all the information it needs, and then simply upload it with the rest of the frames and relevant data. Using bandwidth nowadays is much cheaper than buying extra processing power on a server.

    • it has n frames to loop over
    • it has a float value dft representing how long each frame should last
    • it has a GPU, or at least a much better CPU than the dynos Heroku has to offer

    I've also looked around to see if anyone has made an extensive tutorial on how to use ffmpeg in Swift, but I still didn't find anything at my level. I didn't even find a tutorial per se, only some GitHub projects that were partially complete and/or lacked a link to the original tutorial that would explain the thought process.

    I would appreciate any tips/code samples/tutorials on the subject.

    I'm adding the ffmpeg command-line equivalent of what I would love to be able to do (if I could use ffmpeg directly with iOS, that could be nice too):

    ffmpeg -framerate 100/13 -loop 1 -i frame%02d.png -c:v libx264 -r 100/13 -pix_fmt yuv420p -t 0:15 instagram.mp4

    where basically I did 100 / (dft * 100) for the input frame rate (with dft = 0.13, that gives 100/13) and just output at the same fps for 15 seconds. By the way, if there are any ways to optimise this command so it runs faster without losing quality, I might be able to keep the current Heroku setup, although I would still prefer an iOS solution.
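
    To give a concrete starting point, here is a minimal sketch of the AVAssetWriter approach (untested and reduced to the essentials; the function makeVideo and the pixelBuffer(from:size:) helper are names I made up for illustration, not a standard API):

    import AVFoundation
    import UIKit

    // Sketch: write `frames` to an H.264 mp4, showing each frame for `dft`
    // seconds and looping over the frames until `duration` seconds are covered.
    func makeVideo(frames: [UIImage], dft: Double, duration: Double,
                   size: CGSize, outputURL: URL,
                   completion: @escaping (Error?) -> Void) throws {
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: Int(size.width),
            AVVideoHeightKey: Int(size.height)
        ])
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input, sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)

        let frameDuration = CMTime(seconds: dft, preferredTimescale: 600)
        let totalFrames = Int(duration / dft)
        var frameIndex = 0

        input.requestMediaDataWhenReady(on: DispatchQueue(label: "video.writer")) {
            while input.isReadyForMoreMediaData && frameIndex < totalFrames {
                let image = frames[frameIndex % frames.count]   // loop the frames
                if let buffer = pixelBuffer(from: image, size: size) {
                    let pts = CMTimeMultiply(frameDuration,
                                             multiplier: Int32(frameIndex))
                    _ = adaptor.append(buffer, withPresentationTime: pts)
                }
                frameIndex += 1
            }
            if frameIndex >= totalFrames {
                input.markAsFinished()
                writer.finishWriting { completion(writer.error) }
            }
        }
    }

    // Helper: render a UIImage into a CVPixelBuffer the adaptor can accept.
    func pixelBuffer(from image: UIImage, size: CGSize) -> CVPixelBuffer? {
        var buffer: CVPixelBuffer?
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: true,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: true] as CFDictionary
        guard CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width),
                                  Int(size.height), kCVPixelFormatType_32ARGB,
                                  attrs, &buffer) == kCVReturnSuccess,
              let buf = buffer else { return nil }
        CVPixelBufferLockBaseAddress(buf, [])
        defer { CVPixelBufferUnlockBaseAddress(buf, []) }
        guard let ctx = CGContext(data: CVPixelBufferGetBaseAddress(buf),
                                  width: Int(size.width), height: Int(size.height),
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(buf),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue),
              let cg = image.cgImage else { return nil }
        ctx.draw(cg, in: CGRect(origin: .zero, size: size))
        return buf
    }

    With dft = 0.13 and duration = 15, this should produce roughly the same output as the ffmpeg command above.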

  • Portable YUV Drawing Context

    1 June 2017, by Leif Andersen

    I have a stream of YUV data (from a video file) that I want to draw to the screen in real time; basically, I want to write a program that plays the video in real time.

    As such, I am looking for a portable way to send YUV data to the screen, so that I don't have to reimplement it for every major platform.

    I have found a few options, but all of them seem to have significant issues. They are:

    1. Use OpenGL directly, converting the YUV data to RGB. (And using the single-quad-for-the-whole-screen trick.)

    This obviously won't work, because converting YUV to RGB on the CPU is going to be too slow for displaying images in real time.

    2. Use OpenGL, but use a shader to convert the YUV stream to RGB.

    This option is a bit better. The problem here is that (afaict) this will involve making two streams and splicing them together. It might work, but may have issues with larger resolutions. (The sketch after this question shows the per-pixel math such a shader would implement.)

    3. Use SDL instead, which has the option of creating a YUV context directly.

    The problem with this is that I am already using a cross-platform widget library for other aspects of my program (such as playback controls). As far as I can tell, SDL only opens up in its own (possibly borderless) window. I would ideally like my controls and drawing context to be in the same window, which I can do with OpenGL but not with SDL.

    4. Use SDL, and also use something like Qt for the on-screen widgets, with a message-passing protocol to communicate between the two libraries. Have the (borderless) SDL window constantly move itself on top of the Qt window.

    While this approach is clever, it seems like the two windows could easily get out of sync, making the user experience sub-optimal.

    5. Forget a cross-platform library; do things OS-specific, making use of hardware acceleration where present.

    This is a fine solution, although it's not cross-platform.

    As such, is there any good way to draw YUV data to a screen that is, ideally:

    1. Portable (at least to the major platforms).
    2. Fast enough to be real time.
    3. Allows other widgets in the same window.
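
    For option 2, the conversion the shader has to do is only a small per-pixel transform, which is why it is cheap on a GPU. As a reference for what such a fragment shader computes, here is the math in Swift (an illustrative CPU version only, assuming full-range BT.601 8-bit samples; a real implementation would upload the Y, U, and V planes as single-channel textures and evaluate this per pixel in the shader):

    // Reference for the per-pixel YUV -> RGB conversion a fragment shader
    // would perform. Assumes full-range (JPEG-style) BT.601 samples; video
    // is often limited-range BT.601/BT.709, which uses different constants.
    func yuvToRGB(y: UInt8, u: UInt8, v: UInt8) -> (r: UInt8, g: UInt8, b: UInt8) {
        let yf = Double(y)
        let uf = Double(u) - 128.0          // chroma is centered on 128
        let vf = Double(v) - 128.0
        let r = yf + 1.402 * vf
        let g = yf - 0.344136 * uf - 0.714136 * vf
        let b = yf + 1.772 * uf
        func clamp(_ x: Double) -> UInt8 { return UInt8(min(max(x, 0.0), 255.0)) }
        return (clamp(r), clamp(g), clamp(b))
    }

    Since each output pixel depends only on its own Y, U, and V samples, a GPU can convert a whole frame in one pass, which is what makes option 2 viable where option 1 is not.
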
  • Rebuilding Website for sharing videos

    22 November 2015, by Léo Le Gall

    Some friends and I run a sport forum with a decent user base. A lot of users wanted the ability to share video clips of their tricks. We didn't really think anyone would use a video website that we made, so I built a really simple one just to see whether the users would actually use it. I hosted the video site on a $10 VPS, and it got overwhelmed. The site was honestly garbage: it was not visually appealing, and the performance was just sad. Just as expected, really, since this was only a test site. Since our test project was a success, we want to create a new and more polished site for the videos.

    The website is really simple, and probably not optimized in any way at the moment. I will try to explain in detail what it does. The user uploads some video files in format X; the website combines them and converts (using ffmpeg) the final video to mp4, so it can be served with an HTML5 video player. The user gets a link (example.com/randomvideo) where he can watch the video through an HTML5 player serving the mp4 file (just default HTML5, nothing fancy). The videos only contain highlights, and the final video is always under 1 minute; most are around 30 seconds.

    Currently everything happens on the same server, both the video processing and the video serving. Here is how it works:

    1. The user uploads some videos
    2. The server stores the videos in a new random folder
    3. Combine the videos and convert the result to mp4 with ffmpeg (see the sketch after this list)
    4. Move the final video (random name) to the directory containing processed videos
    5. Store the name of the video file in the database (so the website can serve it)
    6. Delete the directory used to process the videos
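
    As a concrete reference for steps 3 and 4, here is a minimal sketch of such a worker in server-side Swift (illustrative only: it assumes ffmpeg is installed at /usr/bin/ffmpeg and that the uploaded clips already share codec and parameters; clips in differing formats would need re-encoding first):

    import Foundation

    // Sketch of steps 3-4: combine the uploaded clips with ffmpeg's concat
    // demuxer and produce an H.264 mp4 that an HTML5 player can serve.
    func combineAndConvert(clips: [URL], workDir: URL, output: URL) throws {
        // The concat demuxer wants a text file listing the inputs, one per
        // line. (Naive quoting: paths containing ' would need escaping.)
        let listFile = workDir.appendingPathComponent("inputs.txt")
        let listing = clips.map { "file '\($0.path)'" }.joined(separator: "\n")
        try listing.write(to: listFile, atomically: true, encoding: .utf8)

        let ffmpeg = Process()
        ffmpeg.executableURL = URL(fileURLWithPath: "/usr/bin/ffmpeg")
        ffmpeg.arguments = [
            "-f", "concat", "-safe", "0", "-i", listFile.path,
            "-c:v", "libx264", "-pix_fmt", "yuv420p",
            "-movflags", "+faststart",   // moov atom up front for web playback
            output.path
        ]
        try ffmpeg.run()
        ffmpeg.waitUntilExit()
        guard ffmpeg.terminationStatus == 0 else {
            throw NSError(domain: "ffmpeg", code: Int(ffmpeg.terminationStatus))
        }
    }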

    I want to rebuild the website's architecture so that it can scale and handle heavy load. I have never done this before, and I am currently putting together a plan for how to do it. At the moment the plan is:

    1. Separate servers for processing videos and for serving videos (all VPS at the start)
    2. Use a content delivery network to serve static files
    3. Use load balancers for both the processing servers and the serving servers

    I don't really know what I should do about the database(s). Can I make do with one database, or should I use more? They do not need to store sensitive information, since auth is done through an API; they only need to store information about the videos. I have experience with PostgreSQL, MySQL, and Redis, but I am not limited to those. What would you recommend in terms of scalability?

    I will appreciate all the feedback I can get regarding my plan and what to do about the databases. I know this might be a bit vague, so please ask me if I have forgotten anything important. Thanks for reading.