
Media (91)

Other articles (18)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is intended to manage sites for publishing documents of all types.
    It creates "medias", namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only a single document can be linked to a so-called "media" article;

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
    This lets channel administrators fine-tune these menus.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: The main menu; identifier: barrenav; this menu is generally inserted at the top of the page, after the header block, and its identifier makes it compatible with skeletons based on Zpip; (...)

  • Selection of projects using MediaSPIP

    2 May 2011, by

    The examples below are representative of specific uses of MediaSPIP in specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen such associations. Its members (...)

On other sites (3251)

  • FFMPEG and AWS: What's the most efficient way to handle this?

    28 May 2022, by Red Vic

    I'm new to AWS, and I originally built the FFmpeg functions into my Node.js API. But I realized this is the wrong way to do it in a real-world app, and that you need separate Lambda functions in AWS that handle the video editing apart from the main server.

    I'm mainly a front-end developer, but I'm open to learning new things.

    


    I basically have the following process in my app (a rough sketch of the corresponding commands follows the list):

    • User uploads a video.
    • I need to take that video and add a watermark to it.
    • I then need a copy of the watermarked video in a smaller resolution.
    • I then need a 6-second GIF of the smaller-resolution video.
    • Finally, I need to upload the 3 edited files (2 .mp4s and 1 .gif) to S3 and remove the original, non-watermarked video.
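
    For context, the kind of FFmpeg and AWS CLI commands I have in mind for those steps looks roughly like this (the file names, watermark position, output sizes and bucket name are just placeholders):

       # 1. Burn the watermark into the original upload
       ffmpeg -i input.mp4 -i watermark.png -filter_complex "overlay=10:10" -c:a copy watermarked.mp4

       # 2. Make a smaller-resolution copy of the watermarked video (480p, width kept proportional)
       ffmpeg -i watermarked.mp4 -vf "scale=-2:480" -c:a copy watermarked_480p.mp4

       # 3. Take a 6-second GIF from the smaller copy
       ffmpeg -ss 0 -t 6 -i watermarked_480p.mp4 -vf "fps=10,scale=320:-1:flags=lanczos" preview.gif

       # 4. Upload the three results to S3 and drop the original, non-watermarked file
       aws s3 cp watermarked.mp4 s3://my-bucket/videos/
       aws s3 cp watermarked_480p.mp4 s3://my-bucket/videos/
       aws s3 cp preview.gif s3://my-bucket/videos/
       rm input.mp4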


    


    Here are my questions, to be clear:

    • Should I upload the original file to S3 or to the server? And why?
    • Is the process above doable in a single Lambda function, or do I need more Lambda functions?
    • How would you handle this problem, personally?


    


    I originally built it by chaining one function to the next with promises, but AWS seems like a different way of doing things, and the way I originally built it would not work.

    Thanks a lot.

    


    Update
    Here are some tests I did with a couple videos:

    | Measurement                                     | Test 1            | Test 2            | Test 3           | Test 4      | Test 5                    |
    | Original video resolution                       | 1080p             | 1080p             | 1080p            | 1080p       | 480p                      |
    | Original video duration                         | 23 minutes        | 15 minutes        | 11 minutes       | 3.5 minutes | 5 minutes                 |
    | Step 1 duration (watermarking original video)   | 30 minutes        | 18 minutes        | 14 minutes       | 4 minutes   | 2 minutes                 |
    | Step 2 duration (watermarking lower resolution) | 5 minutes         | 3 minutes         | 3 minutes        | 1 minute    | skipped (already low res) |
    | Step 3 duration (6-second GIF creation)         | negligible (15 s) | negligible (10 s) | negligible (7 s) | negligible  | negligible                |
    | Total                                           | 35 minutes        | 21 minutes        | 17 minutes       | 5 minutes   | 2 minutes                 |

    


  • Determine timestamp in mpg video of KLV streams

    22 February 2024, by Joshua Levoy

    I'm currently working with mpg videos containing MISP-compliant KLV data. I'm able to find the streams containing this data with ffmpeg, using the following command:

    


    `ffmpeg -v 0 -ss 0 -i "Day Flight.mpg" -map 0:1 -f framecrc  -`


    


    By varying the -ss argument I am able to select different KLV streams by excluding parts of the video, which leads me to believe that ffmpeg is capable of determining how far into the video these KLV streams are. I'd very much like to find the timestamps of the KLV streams, so that I can create a process by which a MISP-compliant mpg video can be clipped into smaller segments while its relevant MISP-compliant KLV data is preserved.

    


    Is there a way to get ffmpeg to show me the timestamps where these KLV streams begin, and if not, does anyone know of a better approach to determining the position of these KLV streams within the video?

    


    You can obtain the mpg video file containing KLV data that I am using here: https://samples.ffmpeg.org/MPEG2/mpegts-klv/Day%20Flight.mpg

    


    Currently I have run the command laid out above and receive the following output:

    


       #software: Lavf58.29.100
       #tb 0: 1/90000
       #media_type 0: data
       #codec_id 0: klv
       0,          0,          0,        0,      163, 0xc7572adf, S=1,        1, 0x00bd00bd
       0,          0,          0,        0,      163, 0xeddd2774, S=1,        1, 0x00bd00bd
       0,          0,          0,        0,      163, 0x5c8d29b0, S=1,        1, 0x00bd00bd
       0,          0,          0,        0,      163, 0xb7c428f5, S=1,        1, 0x00bd00bd
       0,          0,          0,        0,      163, 0x3c3d28a5, S=1,        1, 0x00bd00bd
       0,          0,          0,        0,      162, 0x5cdd2898, S=1,        1, 0x00bd00bd

    


    This is useful for confirming that there are KLV packets within the video, but it tells me very little about the timestamps they correspond to. I have other methods of extracting and decoding the KLV values at these locations; what I'm interested in specifically is knowing, as closely as possible, the exact timestamps that the KLV streams refer to within the 3-minute-20-second video.
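
    One idea I've been toying with (I'm not sure it's the right tool for this) is listing the data-stream packets with ffprobe and reading their pts_time values directly, roughly:

       ffprobe -v error -select_streams 1 -show_packets -show_entries packet=pts_time,dts_time,size -of csv "Day Flight.mpg"

    but I don't know whether those packet timestamps are the right thing to line the KLV data up against.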

    


  • C++/CLI — 0xc000007b (INVALID_IMAGE_FORMAT) with /clr option on

    9 March 2015, by OverMachoGrande

    I’m trying to build a C++/CLI executable to which I statically link ffmpeg (libavcodec, libavformat, libavutil & swscale). It works fine if I build it normally (without /clr, so no CLR support). However, when I add CLR support, it won’t start up and fails with 0xc000007b. A "Hello World" C++/CLI app runs fine, though.

    Supposedly the same thing happens with Boost::Threads, but since ffmpeg is pure C, I doubt it’s using Boost.

    My config:

    • Visual Studio 2008 Professional SP1
    • Windows XP Pro SP3 (x86)
    • .NET Framework 3.5 SP1

    Thanks,
    Robert