
Media (91)

Other articles (49)

  • Emballe Médias: A simple way to put documents online

    29 October 2010

    The emballe médias plugin was developed mainly for the mediaSPIP distribution, but it is also used in other related projects, such as géodiversité. Required and compatible plugins
    To work, this plugin requires that other plugins be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui
    Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5), with MP4 also supported by Flash.
    Audio files are encoded in MP3 and Ogg (supported by HTML5), with MP3 also supported by Flash.
    Where possible, text is analyzed in order to retrieve the data needed for detection by search engines, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
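
    The exact settings are MediaSPIP's own, but the conversions described above map onto standard ffmpeg invocations. Below is a minimal sketch of that kind of pipeline; the file names and codec options are assumptions, not MediaSPIP's actual implementation.

    ```python
    # Sketch only: produce the web-friendly renditions mentioned above from one
    # uploaded video by shelling out to ffmpeg (codec options are assumptions).
    import subprocess

    def encode_web_formats(src: str, stem: str) -> None:
        """Encode src into WebM + Ogv (HTML5) and MP4 (HTML5/Flash) renditions."""
        jobs = {
            f"{stem}.webm": ["-c:v", "libvpx", "-c:a", "libvorbis"],     # WebM (HTML5)
            f"{stem}.ogv":  ["-c:v", "libtheora", "-c:a", "libvorbis"],  # Ogg/Theora (HTML5)
            f"{stem}.mp4":  ["-c:v", "libx264", "-c:a", "aac"],          # MP4 (HTML5/Flash)
        }
        for out, codecs in jobs.items():
            subprocess.run(["ffmpeg", "-y", "-i", src, *codecs, out], check=True)

    encode_web_formats("upload.mov", "upload_web")
    ```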

On other sites (6365)

  • Writing Live-Multimedia-Application using OpenGL & Co. saving output to disc [closed]

    21 January 2013, by user1997286

    I want to write an application that does the following:

    • Getting commands via ArtNET (DMX over Ethernet, a control protocol) for each object (called a Layer)
    • each Layer can be one of the following: live camera stream, movie, image
    • each Layer can be translated, rotated or stretched
    • on each Layer I can set filters (like a kaleidoscope effect, blur, color correction, etc.)
    • the resulting video stream is in 3D space
    • I want to display each part of the image on one projector (up to 3 in total) using a TripleHead2GO (the 3 projectors display different regions of my DVI output). Each projector image should have its own soft-edge and keystone parameters.
    • the resulting image will also be shown on a preview screen with an information overlay.

    I think all that should be possible with OpenGL and OpenAL (for the movie audio).

    I think I'll use C++, OpenGL for graphics, OpenAL for audio, ffmpeg for video conversion if needed, and Ubuntu/Debian as the OS.

    The software is used to do multimedia shows at concerts, including cameras and the like.

    All that should happen live (on a Full HD output), with an i7 3770, a GTX 670 and 16 GB of RAM, for at least 8 Layers (4 live images at once, plus some overlays like the actor's name and some logos).

    But now comes the question.

    Is it also possible to do the following with that setup:

    • Writing the output image, with all the 3D translations, to a movie file (to master a DVD later), with audio (see the sketch after this list)
    • Mixing audio from different inputs & files (ambience mics, signal from the sound mixer, playbacks from my own application) into more than one mix (e.g. one mix for the recording, one mix for live)
    • Streaming that output, complete or in parts (e.g. the left part of the image), over the network (for example, Projector 1 is near the server, so I connect it using DVI; Projectors 2+3 are connected to a computer that receives the streams for those two projectors, with soft edge on each stream; and Screen 4 is outside the concert hall and shows the complete live stream)
    • What GUI framework should I use for that?
    • Is it perhaps even performant enough to use Java for that?
    • Is it possible to use that mechanism for just rendering (e.g. I have stored the cut points on disk and saved every single camera stream, so I can fix some errors later or cut out some parts)?
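
    Regarding the first point above, one common pattern (a sketch under assumptions, not a definitive answer) is to read the rendered frames back each frame (e.g. with glReadPixels) and pipe them as raw video into an ffmpeg process that muxes them with the recording audio mix. The snippet below shows that pattern in Python, with a synthetic frame source standing in for the OpenGL readback; the audio file name is a placeholder.

    ```python
    # Sketch: feed raw RGBA frames (stand-in for glReadPixels output) to ffmpeg,
    # which muxes them with an audio mix into a movie file for later DVD mastering.
    import subprocess

    WIDTH, HEIGHT, FPS = 1920, 1080, 25

    ffmpeg = subprocess.Popen(
        ["ffmpeg", "-y",
         "-f", "rawvideo", "-pix_fmt", "rgba", "-s", f"{WIDTH}x{HEIGHT}",
         "-r", str(FPS), "-i", "-",             # video: raw frames arriving on stdin
         "-i", "recording_mix.wav",             # audio: placeholder recording mix
         "-c:v", "libx264", "-pix_fmt", "yuv420p",
         "-c:a", "aac", "-shortest", "master.mp4"],
        stdin=subprocess.PIPE,
    )

    for frame_no in range(FPS * 10):            # 10 seconds of dummy frames
        frame = bytes([frame_no % 256, 0, 0, 255]) * (WIDTH * HEIGHT)
        ffmpeg.stdin.write(frame)               # in the real app: the glReadPixels buffer

    ffmpeg.stdin.close()
    ffmpeg.wait()
    ```
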
  • Complex Text Animations with FFMPEG [closed]

    5 November 2020, by Judah Meek

    I'm trying to figure out how the text animations used by https://storycreatorapp.com/demo (click on "text" on the left sidebar to view them) are created.

    


    I understand that the drawtext filter is usually used to draw the text, but what filter is used to hide the text at the beginning of the animation (most of the animations have text that seems to slide out of nowhere right in the middle of the screen)?
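
    One way to approximate that effect with nothing but drawtext (a sketch, not how Story Creator actually does it) is to animate the text's position and alpha with expressions of t, so the text is invisible at t=0 and drifts into place while fading in. The font path and file names below are assumptions.

    ```python
    # Sketch: reveal a caption by fading it in and nudging it upward during the
    # first second, using drawtext expressions of t (commas inside expressions
    # must be escaped in a filtergraph). Font path and file names are assumptions.
    import subprocess

    reveal = (
        "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:"
        "text='Hello':fontsize=72:fontcolor=white:"
        "x=(w-text_w)/2:"                      # horizontally centred
        "y=h/2+50*(1-min(t\\,1)):"             # slides up over the first second
        "alpha=min(t\\,1)"                     # fades in over the same second
    )

    subprocess.run(["ffmpeg", "-y", "-i", "input.mp4", "-vf", reveal,
                    "-c:a", "copy", "out.mp4"], check=True)
    ```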

    


  • How to implement Text Animations with FFMPEG ?

    5 December 2018, by Ravindra Samdhiya

    I want to achieve text animations in a video. An example is given below:

    https://www.youtube.com/watch?v=_sLeWcwBJ_s

    Is this possible with FFMPEG, or is there another way? I have implemented text in and out with FFMPEG, but now I want to add animation effects. Is this possible?
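
    FFMPEG can do simple text animation on its own: drawtext accepts expressions of t for its position, and the enable option limits it to a time window, so a caption can slide in from off-screen without extra tools. The following is a hedged sketch (placeholder file names, an assumed font path), not a recipe for the exact look in the linked video; complex multi-element animations are often easier to pre-render elsewhere and overlay.

    ```python
    # Sketch: slide a caption in from off-screen left to the centre over 2 seconds,
    # showing it only for the first 6 seconds (commas inside expressions must be
    # escaped in a filtergraph). File names and font path are placeholders.
    import subprocess

    slide_in = (
        "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:"
        "text='Chapter One':fontsize=64:fontcolor=white:"
        "y=(h-text_h)/2:"                         # vertically centred
        "x=-text_w+((w+text_w)/2)*min(t/2\\,1):"  # off-screen left -> centre in 2 s
        "enable=between(t\\,0\\,6)"               # caption visible for the first 6 s
    )

    subprocess.run(["ffmpeg", "-y", "-i", "clip.mp4", "-vf", slide_in,
                    "-c:a", "copy", "clip_titled.mp4"], check=True)
    ```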