Other articles (32)

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media-sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • The plugin: Podcasts

    14 July 2010

    The problem of podcasting is once again one that reveals the state of standardization of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, heavily geared toward iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free" and is notably supported by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Images

    15 May 2013

On other sites (3316)

  • Using ffmpeg with Imagick

    19 March 2014, by user3240613

    I am trying to generate thumbnails from videos with Imagick by extracting a single frame from each one, using the ffmpeg application.

    I currently use this code:

    $image->newPseudoImage( null, null, 'ffmpeg:video.mp4[50]');

    It works. But it is not an ideal solution.
    I want to generate the thumbnail from the 50% position in the video, but I do not know how long the video is, so I can't do something like ffmpeg:video.mp4[500001]. And even if I knew the length, I still couldn't do it, because running ffmpeg:video.mp4[1000] takes almost 20 seconds to execute (ffmpeg:video.mp4[50] takes only one or two seconds).

    When I try to add extra parameters, like "ffmpeg:video.mp4[50] -ss 50", it returns an error.

    The only other option I can think of is using exec to run the ffmpeg command directly in the shell, something like "ffmpeg -i video.mp4 -vframes 1 screenshot.jpg". Would that actually be a more efficient solution than using the newPseudoImage method?
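
    A minimal sketch of that shell-based approach, assuming ffmpeg and ffprobe are available on the server and that video.mp4 / thumb.jpg are placeholder names: ffprobe reports the duration, and ffmpeg seeks to the midpoint before decoding, so only a single frame is actually read.

    #!/bin/sh
    # Sketch only: grab one frame from the midpoint of the video.
    # "video.mp4" and "thumb.jpg" are placeholder file names.

    # Container duration in seconds (ffprobe prints a bare number).
    duration=$(ffprobe -v error -show_entries format=duration \
               -of default=noprint_wrappers=1:nokey=1 video.mp4)

    # Midpoint of the clip.
    midpoint=$(awk -v d="$duration" 'BEGIN { printf "%.3f", d / 2 }')

    # Seeking with -ss before -i is an input seek (fast), then decode one frame.
    ffmpeg -y -ss "$midpoint" -i video.mp4 -frames:v 1 thumb.jpg

    From PHP this could be wrapped in exec() or shell_exec(), with escapeshellarg() around any user-supplied paths.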

  • Streaming protocol relay without involving codec

    4 décembre 2015, par kiran_g

    I am trying to use libav to relay an RTSP stream. It involves pulling the stream from an IP camera and then pushing it to Wowza.

    The video in the IP camera stream is encoded as H.264. To enable H.264 in my libav application I need to enable x264, but as x264 is GPL, it will not work with my business plan.

    My question is whether libav (ffmpeg) can be made to work as a dumb, encoding-agnostic relay, so that I don't need to integrate x264 with ffmpeg.

    This SO post says that I can use the "copy" argument, but does that allow me to exclude x264?

    BTW, is x264 actually needed by ffmpeg for decoding H.264? Is x264 only used for encoding?

    See here for my current code.
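
    A minimal sketch of the stream-copy relay with the ffmpeg command-line tool, where both URLs are placeholders: with -c copy the compressed packets are remuxed without being decoded or re-encoded, so the x264 encoder is never involved (and decoding H.264 uses ffmpeg's built-in decoder, not x264).

    #!/bin/sh
    # Sketch only: relay an RTSP pull to an RTMP push without re-encoding.
    # Both URLs are placeholders.

    ffmpeg -rtsp_transport tcp \
           -i "rtsp://camera.example/stream" \
           -c copy \
           -f flv "rtmp://wowza.example/live/streamName"

    The same packet-level copy can be done through the libav* APIs by reading packets with av_read_frame and writing them with av_interleaved_write_frame, without ever opening an encoder or a decoder.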

  • Rebuilding Website for sharing videos

    22 November 2015, by Léo Le Gall

    Some friends and I run a sport forum with a decent user base. A lot of users wanted the ability to share video clips of their tricks. We didn't really think anyone would use a video website that we made, so I built a very simple one just to see if the users would actually use it. I hosted the video site on a $10 VPS, and it got blown away. The site was literally garbage: it was not visually appealing and the performance was just sad. Just as expected really, since this was just a test site. Since our test project was a success, we want to create a new and more polished site for the videos.

    The website is really simple, and probably not optimized in any way at the moment. I will try to explain in detail what the website does. The user uploads some video files in format X; the website combines them and converts (using ffmpeg) the final video to mp4 (so it can be served with an HTML5 video player). The user gets a link (example.com/randomvideo) where they can watch the video through an HTML5 player serving the mp4 file (just default HTML5, nothing fancy). The videos only contain highlights, and the final video is always under 1 minute; most are around 30 seconds.

    Currently everything happens on the same server, both the video processing and the video serving. Here is how it works:

    1. The user uploads some videos
    2. The server stores the videos in a new random folder
    3. Combine the videos and convert the result to mp4 with ffmpeg (see the sketch after this list)
    4. Move the final video (random name) to the directory containing processed videos
    5. Store the name of the video file in the database (so the website can serve it)
    6. Delete the directory used to process the videos
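
    A minimal sketch of step 3, assuming the uploaded clips share compatible codec parameters and that list.txt / output.mp4 are placeholder names: the ffmpeg concat demuxer joins the clips, and the result is re-encoded to an H.264/AAC mp4 suitable for HTML5 playback.

    #!/bin/sh
    # Sketch only: combine uploaded clips and convert the result to mp4.
    # File names are placeholders; the clips are assumed to use compatible codecs.

    # The concat demuxer reads a list of input files.
    printf "file '%s'\n" clip1.mp4 clip2.mp4 > list.txt

    # Join the clips, re-encode to H.264 + AAC, and move the index to the
    # front of the file so playback can start before the download finishes.
    ffmpeg -f concat -safe 0 -i list.txt \
           -c:v libx264 -preset fast -crf 23 \
           -c:a aac -b:a 128k \
           -movflags +faststart output.mp4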

    I want to rebuild the website's architecture so that it can scale and handle heavy load. I have never done this before, and I am currently making a plan for how to do it. At the moment my plan is:

    1. Separate servers for processing videos and serving videos (all VPS at the start)
    2. Use a content delivery network for serving static files
    3. Use load balancers in front of both the processing servers and the serving servers

    I don't really know what I should do about the database(s). Can I get by with one database, or should I use more? They do not need to store sensitive information, since auth is done through an API; they only need to store information about the videos. I have experience with PostgreSQL, MySQL and Redis, but I am not limited to those. What would you recommend in terms of scalability?

    I will appreciate all the feedback I can get regarding my plan and what to do about the databases. I know this might be a bit vague, so please ask if I have forgotten anything important. Thanks for reading.