Advanced search

Media (0)

Keyword: - Tags -/presse-papier

No media matching your criteria is available on the site.

Other articles (10)

  • Selection of projects using MediaSPIP

    2 May 2011, by

    The examples below are representative of specific uses of MediaSPIP for particular projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of this kind. Its members (...)

  • Selection of projects using MediaSPIP

    29 April 2011, by

    The examples below are representative of specific uses of MediaSPIP for certain projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
    MediaSPIP farm @ Infini
    The Infini association develops hospitality activities, an internet access point, training, innovative projects in the field of Information and Communication Technologies, and website hosting. In this field it plays a unique role (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (2270)

  • Can FFMPEG be used to change the transparency of a single PNG?

    19 July 2020, by Chameleon

    I know that FFMPEG can manipulate transparency using fade in/fade out over a series of frames. I just haven't found a way to generate a png with a specified transparency setting.

    I am in the process of creating a lot (20 in the first set) of procedurally-generated videos. Each video requires several starts and stops, where with each stop a png overlay describing what is seen will be displayed for several frames (actual number TBD, and it may vary). I'd like each overlay to fade in over x frames, display for y frames at full opacity, and then fade out over z frames. The documentation on FFMPEG really sucks at explaining fade effects, and none of the usage examples I've found actually explain what all of the parameters do.
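
    For reference, a minimal sketch of the frame-based fade syntax (the filenames, frame counts, and 25 fps rate here are assumptions, not anything from the question): t selects fade-in or fade-out, s is the first frame of the fade, n is its length in frames, and alpha=1 makes the fade affect only the alpha channel. Looping a single overlay png into a 100-frame sequence with its alpha faded in and out might look like:

    ffmpeg -loop 1 -framerate 25 -i overlay.png -t 4 -vf "fade=t=in:s=0:n=12:alpha=1,fade=t=out:s=88:n=12:alpha=1" overlay_faded_%04d.png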

    The original sources for the videos are CGI png frames with a transparent background. I will be making the descriptive overlays with transparent backgrounds (same resolution as the CGI frames). I'm really hoping not to have to manually save each overlay (8 overlays per video at a minimum) at 4 to 6 (or more) transparency settings; the company is being really indecisive about the length of the fade in/out and the "hang time" of the descriptive overlays.

    I have already created a Python script that uses FFMPEG to place overlays on specific frames, stitch all of the frames into a single video, add a background to the video, and then place the company's watermark on it. It already manages the key frames and which images should be overlaid (and for how many frames). It is working well, but the company doesn't like the lack of fade. I already have the framework in place to manage the fade in/out duration (I'm just missing the answer to this question). I created the script because I have no doubt that I'll have to regenerate the final output for the first couple of videos a number of times to appease the stakeholders.
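
    As an illustration of that kind of step (the file names and frame numbers are made up), the overlay filter's timeline enable option can restrict an overlay to a span of frames in a single pass:

    ffmpeg -i stitched.mp4 -i label.png -filter_complex "[0:v][1:v]overlay=x=0:y=0:enable='between(n,200,300)'" labeled.mp4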

    I'm really hoping to find a fairly easy way to hand FFMPEG a png with a transparent background and have FFMPEG hand me a copy of the image whose non-transparent part is now "x percent" transparent (or "y percent" opaque). I know from failed attempts that FFMPEG doesn't choke trying to make a region more than 100% transparent.

    It's a real pain to get approval to install new software on the workstation, so I'm not actually interested in any suggestion that doesn't use FFMPEG or a pretty vanilla Python installation. It's not that other software might not be useful; it just means that if other software is needed, I'll have to create the frames manually.

  • avutil/half2float : move tables to header-internal structs

    9 August 2022, by Timo Rothenpieler
    avutil/half2float : move tables to header-internal structs
    

    Having to put the knowledge of the size of those arrays into a multitude
    of places is rather smelly.

    • [DH] libavcodec/exr.c
    • [DH] libavcodec/exrenc.c
    • [DH] libavcodec/pnm.h
    • [DH] libavcodec/pnmdec.c
    • [DH] libavcodec/pnmenc.c
    • [DH] libavutil/float2half.h
    • [DH] libavutil/half2float.h
  • ffmpeg replacing audio track with track from another video but different length (sync audio)

    27 August 2019, by Matt

    Background

    I have digitized some old Canon Video8 tapes. I used a SONY Digital8 camera (which is backwards compatible with Video8) to output DV (using its inbuilt ADC). The conversion process worked well for the video, but the audio came through jumpy/distorted in places. This left me with a problem: was it the camera or the tape? So I bought another Samsung Video8 camera (analog only) and, using the SONY's passthrough feature, fed the output from the Samsung (composite video & mono audio) into the SONY, which output the DV. Much to my delight it worked! The audio was clear.

    Result

    DV01.avi - Good Video / Crap Audio
    DV02.avi - Crap Video / Good Audio

    Ok, so obviously what I would like to do is take the video track from DV01 and the audio track from DV02 and join/mux them WITHOUT re-encoding.

    Problem 1: They have different start times, so just copying over the audio track will result in it not being in sync.

    After some googling I found you can use ffmpeg to take care of this:

    First, here is the video info from ffmpeg -i DV01.avi:

    Input #0, avi, from 'DV01.avi':
     Duration: 02:53:06.68, start: 0.000000, bitrate: 28878 kb/s
       Stream #0:0: Video: dvvideo, yuv420p, 720x576 [SAR 16:15 DAR 4:3], 25000 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
       Stream #0:1: Audio: pcm_s16le, 32000 Hz, stereo, s16, 1024 kb/s
       Stream #0:2: Audio: pcm_s16le, 32000 Hz, stereo, s16, 1024 kb/s

    Muxing:

    ffmpeg -itsoffset 4 -i DV01.avi -i DV02.avi -map 0:v -map 1:a -c copy output.avi

    In the example above I am delaying the start of the video by 4 seconds (I think?). This is just an example; I haven't actually tried it, as the file sizes are 40Gb!
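
    A note on that flag, for what it's worth: -itsoffset applies to the next input on the command line, so the command above shifts DV01's timestamps. A sketch of the other direction, assuming it is DV02's audio that needs delaying by 4 seconds (negative offsets are also accepted):

    ffmpeg -i DV01.avi -itsoffset 4 -i DV02.avi -map 0:v -map 1:a -c copy output.avi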

    So my QUESTION is:

    Given the background above, what would be the best way to join/mux the two streams together (without re-encoding) with the audio in sync? Since syncing the audio to the video may need millisecond tweaking, I don't believe trial and error is a good idea (I don't want to tweak it by 10ms and then rinse and repeat on a 40Gb file).

    I just had a thought: could I, say, create a 10-second clip (from the start) of each video, use those to find the reference/sync start point, and then use that offset when muxing the 40Gb versions?
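
    That clip idea is straightforward to sketch with stream copy (the output names here are placeholders), and since DV is intra-frame coded the cuts should be clean:

    ffmpeg -i DV01.avi -t 10 -c copy clip01.avi
    ffmpeg -i DV02.avi -t 10 -c copy clip02.avi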

    Anyway, you get the idea. Looking for ways to solve this problem. Thanks!