Advanced search

Media (91)

Other articles (64)

  • User profiles

    12 April 2011

    Each user has a profile page on which they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    The user can also reach the profile editor from their author page; a "Modifier votre profil" link in the navigation is (...)

  • XMP PHP

    13 May 2011

    According to Wikipedia, XMP means:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it handles a set of dynamic tags for use in the Semantic Web.
    XMP makes it possible to record, as an XML document, information about a file: title, author, history (...)
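
    For illustration (a minimal sketch of mine, not from the article), an XMP packet carrying a title and an author looks roughly like this:

    <?xpacket begin="" id="W5M0MpCehiHzreSzNTczkc9d"?>
    <x:xmpmeta xmlns:x="adobe:ns:meta/">
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
        <rdf:Description rdf:about="" xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title><rdf:Alt><rdf:li xml:lang="x-default">Example title</rdf:li></rdf:Alt></dc:title>
          <dc:creator><rdf:Seq><rdf:li>Example author</rdf:li></rdf:Seq></dc:creator>
        </rdf:Description>
      </rdf:RDF>
    </x:xmpmeta>
    <?xpacket end="w"?>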

  • A selection of projects using MediaSPIP

    29 April 2011

    The examples cited below are representative of specific uses of MediaSPIP in particular projects.
    Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
    Ferme MediaSPIP @ Infini
    The Infini association runs reception activities, a public internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. In this respect it plays a unique role (...)

On other sites (4999)

  • ffmpeg mkv to mp4 conversion has color tint

    25 November 2020, by razvan

    I am recording the screen in a lossless format to keep the CPU load small:

    ffmpeg -f gdigrab -framerate 30 -i desktop -vcodec libx264rgb -crf 0 -preset ultrafast rec.mkv
    ffprobe rec.mkv
    Input #0, matroska,webm, from 'rec.mkv':
      Metadata:
        ENCODER         : Lavf58.64.100
      Duration: 00:00:29.67, start: 0.000000, bitrate: 2829 kb/s
      Stream #0:0: Video: h264 (High 4:4:4 Predictive), gbrp(pc, gbr/unknown/unknown, progressive), 1920x1200, 30 fps, 30 tbr, 1k tbn, 60 tbc (default)
      Metadata:
        ENCODER         : Lavc58.112.103 libx264rgb
        DURATION        : 00:00:29.666000000

    Then I convert/compress it to mp4:

    ffmpeg -i rec.mkv rec.mp4
    ffprobe rec.mp4
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'rec.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        encoder         : Lavf58.64.100
      Duration: 00:00:29.67, start: 0.000000, bitrate: 326 kb/s
      Stream #0:0(und): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), gbrp(tv, gbr/unknown/unknown), 1920x1200, 248 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)

    But the resulting mp4 is tinted green and pink (white areas are tinted green and dark areas are tinted pink).

    I get the same results on Windows and Ubuntu, using the latest git versions of ffmpeg.

    Any idea how to properly convert this to mp4?
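
    A plausible explanation (my assumption, not stated in the post): the recording is RGB H.264 (gbrp, full range), and after the default re-encode the MP4 carries a colour setup that most players interpret as YUV, which swaps the channels into exactly this green/pink tint. Forcing an explicit RGB-to-YUV conversion during the re-encode sidesteps that guesswork; a minimal sketch:

    ffmpeg -i rec.mkv -vf format=yuv420p -c:v libx264 -crf 18 rec.mp4

    The format=yuv420p filter makes the pixel-format conversion explicit; -crf 18 is only an example quality setting.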

    


  • How to find the frame delay between 2 videos, to sync the audio from video 1 to video 2?

    8 January 2021, by jaimepm

    Hello world.

    


    I have many videos that I want to compare pairwise to check whether they are the same, and to get the frame delay between them. What I do now is open both video files in VirtualDub and check manually, near the beginning of video 1, that a given frame is at position, e.g., 4325. Then I check video 2 to find the position of the same frame, e.g., 5500. That makes a delay of +1175 frames. Then I check another given frame near the end of video 1, say at position 183038. I check video 2 too (imagine the position is 184213) and calculate the difference, again +1175: eureka, same video!
    The frames I choose to compare aren't exactly random: each must be one I can match with certainty (for example, a scene change, an explosion that appears from one frame to the next, a dark frame after a bright one...), and I always take the first comparison frame within the first 10000 positions and the second near the end.
    What I do next is convert the audio from video 1 to video 2, calculating the number of ms needed, but I don't need help with that. I'd love to automate the comparison so that I just have to select video 1 and video 2, nothing else; that way I could forget VirtualDub forever and save a lot of time.
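
    One possible starting point for automating this (a sketch of mine, not something from the post): ffmpeg's MPEG-7 video signature filter can fingerprint two inputs in a single run and log whether, and at which timestamps, they match, which could replace the manual frame hunt:

    ffmpeg -i video1.avi -i video2.avi -filter_complex "signature=detectmode=full:nb_inputs=2" -f null -

    The matching positions it reports could then be converted into a frame offset using the stream's frame rate; the file names here are placeholders.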

    


    I'm tagging this post as powershell too because I'm making a script in which, at the moment, I have to enter the frame delay myself (after comparing manually). It would be perfect if I could add this detection at the beginning of the script.

    


    Thanks!

    


  • Automatically detect box/coordinates of burned-in subtitles in a video source

    8 March 2021, by AndroidX

    In reality I'd like to detect the coordinates of the "biggest" (both in height and width) burned-in subtitle in a given video source. But in order to do this I first need to detect the box coordinates of every distinct subtitle in the sample video and compare them to find the biggest one. I didn't know where to start with this, so the closest thing I found (sort of) was ffmpeg's bbox video filter, which according to the documentation computes "the bounding box for the non-black pixels in the input frame luminance plane", based on a given luminance value:

    


    ffmpeg -i input.mkv -vf bbox=min_val=130 -f null -


    


    This gives me a line with coordinates for each input frame of the video, e.g.:

    


    [Parsed_bbox_0 @ 0ab734c0] n:123 pts:62976 pts_time:4.1 x1:173 x2:1106 y1:74 y2:694 w:934 h:621 crop=934:621:173:74 drawbox=173:74:934:621


    


    The idea was to make a script that loops through the filter's output, detects the "biggest" box by comparing them all, and outputs its coordinates and frame number as representative of the longest subtitle.
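
    A minimal sketch of that loop (mine, with an arbitrary min_val and a placeholder file name), keeping the line whose reported box has the largest area:

    ffmpeg -i input.mkv -vf bbox=min_val=130 -f null - 2>&1 |
      awk '/Parsed_bbox/ { for (i = 1; i <= NF; i++) { split($i, kv, ":"); v[kv[1]] = kv[2] }
             if (v["w"] * v["h"] > best) { best = v["w"] * v["h"]; biggest = $0 } }
           END { print biggest }'

    ffmpeg writes filter logs to stderr, hence the 2>&1; the awk script tracks the maximum w*h and prints that line, from which the frame number (n:) and the ready-made crop=/drawbox= values can be read.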

    


    The bbox filter, though, can't properly detect the subtitle box even in a relatively dark video with white hardsubs. By trial and error, and only for the particular video sample I used to run my tests, the "best" result for detecting the box of any subtitle came from a min_val of 130 (supposedly the meaningful values of min_val are in the range 0-255, although the docs don't say). Using the drawbox filter with ffplay to test the coordinates reported for a particular frame, I can see that it correctly detects only the bottom/left/right boundary of the subtitle, presumably because the outline of the globe in the image below is equally bright:

    


    [screenshot: box drawn with min_val=130; only the bottom/left/right edges are correct]
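
    For reference, the verification step mentioned above can be done by pasting the drawbox=... values that bbox prints into ffplay (a sketch using the coordinates from the sample line above):

    ffplay -i input.mkv -vf "drawbox=173:74:934:621:red"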

    


    Raising min_val to 230 slightly breaks the previously correct boundaries at the bottom/left/right side:

    


    [screenshot: box drawn with min_val=230]

    


    And raising it to 240 gives me a weird result:

    


    [screenshot: box drawn with min_val=240]

    


    However, even if I were able to achieve a perfect outcome with the bbox filter, this technique wouldn't be bulletproof, for obvious reasons (min_val has to be chosen arbitrarily, the burned-in subtitles can be a different colour, the image behind the subtitles can be equally or even more bright depending on the video source, etc.; see also the crop sketch after the list below).

    


    So, if possible, I would like to know:

    


      

    1. Is there a filter or another technique I can use with ffmpeg to do what I want?
    2. Is there perhaps another CLI tool or programming library to achieve this?
    3. Any hint that could help (perhaps I'm looking at the problem the wrong way)?
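
    One mitigation worth sketching (my suggestion, not from the post): since the bad detections come from bright pixels elsewhere in the frame, cropping to the band where subtitles normally sit before running bbox keeps scenery such as the globe from winning. The reported coordinates are then relative to the crop, so the vertical offset (here 2*ih/3) has to be added back:

    ffmpeg -i input.mkv -vf "crop=iw:ih/3:0:2*ih/3,bbox=min_val=130" -f null -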