Advanced search

Media (2)

Keyword: - Tags - /kml

Other articles (39)

  • General document management

    13 May 2011, by

    MediaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original available for download in case it cannot be read in a web browser; and retrieving the metadata of the original document in order to describe the file textually.
    The tables below explain what MediaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (6574)

  • FFmpeg encode an applied filter frame

    10 August 2017, by tqn

    I’m coding an app that crops video to a square. I use native libraries from ffmpeg such as libavfilter, libavformat, ... After applying a filter, my video is cropped to the correct size, but when I encode it and write it to the output, it looks as if everything were cropped and zoomed out. Here are the first frames that I extracted to JPEG to debug.

    Original Frame

    Cropped Frame

    Encoded cropped Frame

    I tried both crop functions, av_buffersink_get_frame(buffersink_ctx, pFrameSquare) and av_picture_crop((AVPicture *) pFrameSquare, (AVPicture *) pFrame, pic_format, 0, 0), but the output is the same. So what happened?
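
    A quick way to separate a problem in the crop filter itself from a problem in the later encode/scale step is to run the equivalent square crop from the command line and compare the result with the frames above. This is only a sketch under assumptions: the input file name is hypothetical, and the crop expression takes a centred square of side min(iw,ih), which may not match the exact crop used in the code.

      # Hypothetical input name; crops a centred square of side min(iw,ih).
      # If this output looks right, the zoom/stretch is probably introduced after
      # the filter, e.g. an encoder or scaling context still configured with the
      # original (pre-crop) width and height.
      ffmpeg -i input.mp4 \
             -vf "crop=min(iw\,ih):min(iw\,ih)" \
             -c:a copy cropped_square.mp4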

  • How can I programmatically write and read random video watermarks?

    13 November 2017, by GreenTriangle

    I spent a few minutes trying to think of a clearer way to word my title, but I couldn’t manage it, sorry.

    I want to essentially canary-trap video files: I am (hypothetically, this is not real but a personal exercise) offering them up to 5,000 different people, and if one gets leaked, I want to know who leaked it. Metadata is too easily removed, so what I’d like to do is add a random and subtle watermark to each file, and store information about that in a database.

    For example: on Joe Smith’s copy, a 10x10 pixel 80% transparent red square in the upper left corner for 5 frames. On Diane Brown’s copy, a full-width 5-pixel 90% transparent black bar on the bottom edge for 15 frames. Then, if I find a leaked copy, I could check it against the database.

    I know this still isn’t foolproof: cropping would break co-ordinates, hue/brightness transforms would break colour reading, cutting time would break timestamps. But if I did want to do this anyway, what would be a good strategy for it?

    My idea was to generate PNG overlays randomly, split the video into parts with mkvtoolnix/ffmpeg, re-encode the middle part with ffmpeg + overlay filter, and then rejoin them. But is this silly when there’s a "proper" way to do it? And what would I be doing to read the watermarks, which I can’t even really conceive of?
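
    For the overlay route described above, a possible shortcut (only a sketch, with hypothetical file names) is ffmpeg’s overlay filter with its enable timeline option, which limits the watermark to a frame range without splitting and rejoining the file; the PNG name and the frame range would come from the per-recipient database record.

      # mark_joe.png and the frame range 300-304 are placeholders for the values
      # stored in the database for this recipient.
      ffmpeg -i master.mp4 -i mark_joe.png \
             -filter_complex "[0:v][1:v]overlay=x=10:y=10:enable='between(n,300,304)'" \
             -c:v libx264 -c:a copy joe_smith.mp4

    Reading a mark back would still mean comparing the suspect frames against the clean master at the recorded positions; the sketch above does not address detection.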

  • Fallback input for ffmpeg

    22 September 2018, by Daniel Cantarin

    I’m doing some transcoding from a third-party remote input stream that I do not control.

    This input stream has errors from time to time, which I would like to mitigate before sending the stream to my transcoding pipeline, thereby avoiding some possible problems in the output.

    I have several ideas regarding different problems. But the most basic scenario I would like to set up is as follows: when the stream is down, or it somehow loses some frames, I want to fill that video gap with a secondary input (like a blank screen, for example).

    For this simple task, I would like to use ffmpeg. I know it can mix, let’s say, an input stream with a fullscreen black square static image (a command of this kind is sketched below). However, I have to deal with this other condition: ffmpeg would run on the same infrastructure as the actual transcoding pipeline. That infrastructure must use its computing power for rendering the output. So whatever ffmpeg command I end up using should use the minimum possible computing power.

    My actual problem: if I use -vcodec copy in order to use minimal CPU, I can’t alter the original stream. But if I alter the original stream (by mixing it with some other stream), the operation uses CPU.

    My question: is there a way to use -vcodec copy, but with a fallback input (instead of a mixed one) for when there are video gaps in the primary stream?

    Thanks in advance.
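
    For reference, the mixing approach mentioned in the question might look roughly like the sketch below (the stream URL, resolution, frame rate and encoder settings are placeholders). Overlaying the live input onto a black lavfi background forces a full decode and re-encode of the video, which is exactly the CPU cost the question wants to avoid; with -vcodec copy the packets pass through undecoded, so stream copy by itself cannot substitute a fallback picture into gaps.

      # Placeholders: $INPUT_URL, 1280x720, 25 fps, libx264 settings.
      # The black background comes from the lavfi "color" source; mixing it with
      # the live input requires re-encoding, unlike -vcodec copy.
      ffmpeg -i "$INPUT_URL" \
             -f lavfi -i "color=c=black:s=1280x720:r=25" \
             -filter_complex "[1:v][0:v]overlay=shortest=1[v]" \
             -map "[v]" -map "0:a?" \
             -c:v libx264 -preset veryfast -c:a copy \
             -f mpegts output.ts

    Note that even this does not by itself paper over dropouts in a live source; it only illustrates why any per-frame mixing implies re-encoding.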