Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (20)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Submit enhancements and plugins

    13 April 2011

    If you have developed a new extension to add one or more useful features to MediaSPIP, let us know and its integration into the core MediaSPIP functionality will be considered.
    You can use the development discussion list to ask for help with creating a plugin; since MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone.

On other sites (5765)

  • Is it possible to combine audio and video from ffmpeg-python without writing to files first?

    22 January 2021, by nullUser

    I'm using the ffmpeg-python library.

    I have used the example code: https://github.com/kkroening/ffmpeg-python/tree/master/examples to asynchronously read in and process audio and video streams. The processing is custom and not something a built-in ffmpeg command can achieve (imagine something like tensorflow deep dreaming on both the audio and video). I then want to recombine the audio and video streams that I have created. Currently, the only way I can see to do it is to write both streams out to separate files (as is done e.g. in this answer: How to combine The video and audio files in ffmpeg-python), then use ffmpeg to combine them afterwards. This has the major disadvantage that the result cannot be streamed, i.e. the audio and video must be completely done processing before you can start playing the combined audio/video. Is there any way to combine them without going to files as an intermediate step?

    Technically, the fact that the streams were initially read in from ffmpeg is irrelevant. You may as well assume that I'm in the following situation:

    def audio_stream():
        for i in range(10):
            yield bytes(44100 * 2 * 4)  # one second of audio 44.1k sample rate, 2 channel, s32le format

    def video_stream():
        for i in range(10):
            yield bytes(60 * 1080 * 1920 * 3)  # one second of video 60 fps 1920x1080 rgb24 format

    # how to write both streams of bytes to file without writing each one separately to a file first?

    I would like to use ffmpeg.concat, but this requires ffmpeg.inputs, which only accept filenames as inputs. Is there any other way? Here are the docs: https://kkroening.github.io/ffmpeg-python/.

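    One possible approach, sketched below (this is not part of the original question): on a Unix-like system the two raw streams can be fed through named pipes, so ffmpeg.input still receives a path to read from but no intermediate regular file is ever written. The sketch reuses the audio_stream/video_stream generators above; the FIFO paths, codecs and helper names are illustrative.

    import os
    import tempfile
    import threading
    import ffmpeg  # ffmpeg-python

    def feed(path, chunks):
        # Opening a FIFO for writing blocks until ffmpeg opens it for reading.
        with open(path, "wb") as f:
            for chunk in chunks:
                f.write(chunk)

    tmpdir = tempfile.mkdtemp()
    audio_fifo = os.path.join(tmpdir, "audio.fifo")
    video_fifo = os.path.join(tmpdir, "video.fifo")
    os.mkfifo(audio_fifo)
    os.mkfifo(video_fifo)

    # Feed both FIFOs concurrently from the generators defined in the question.
    threading.Thread(target=feed, args=(audio_fifo, audio_stream()), daemon=True).start()
    threading.Thread(target=feed, args=(video_fifo, video_stream()), daemon=True).start()

    audio_in = ffmpeg.input(audio_fifo, format="s32le", ar=44100, ac=2)
    video_in = ffmpeg.input(video_fifo, format="rawvideo", pix_fmt="rgb24",
                            s="1920x1080", framerate=60)

    # Mux (not concat) the two streams into a single output without intermediate files.
    ffmpeg.output(video_in, audio_in, "combined.mp4",
                  vcodec="libx264", acodec="aac").overwrite_output().run()

    The output container here is an ordinary mp4 written to disk; for actual streaming playback one would write to a pipe or choose a streamable container instead, which this sketch does not cover.
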
  • avfilter/ccfifo: Properly handle CEA-708 captions through framerate conversion

    5 May 2023, by Devin Heitmueller

    When transcoding video that contains 708 closed captions, the
    caption data is tied to the frames as side data. Simply dropping
    or adding frames to change the framerate will result in loss of
    data, so the caption data needs to be preserved and reformatted.

    For example, without this patch converting 720p59 to 1080i59
    would result in loss of 50% of the caption bytes, resulting in
    garbled 608 captions and 708 probably wouldn't render at all.
    Further, the frames that are there will have an illegal
    cc_count for the target framerate, so some decoders may ignore
    the packets entirely.

    Extract the 608 and 708 tuples and insert them onto queues. Then
    after dropping/adding frames, re-write the tuples back into the
    resulting frames at the appropriate rate given the target
    framerate. This includes both having the correct cc_count as
    well as clocking out the 608 pairs at the appropriate rate.

    Thanks to Lance Wang <lance.lmwang@gmail.com>, Anton
    Khirnov <anton@khirnov.net>, and Michael Niedermayer <michael@niedermayer.cc>
    for providing review/feedback.

    Signed-off-by: Devin Heitmueller <dheitmueller@ltnglobal.com>
    Signed-off-by: Limin Wang <lance.lmwang@gmail.com>

    • [DH] libavfilter/Makefile
    • [DH] libavfilter/ccfifo.c
    • [DH] libavfilter/ccfifo.h
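
    As a rough illustration of the queueing idea in the commit message above, here is a much-simplified Python sketch (not the actual libavfilter C code; only the 708 side is shown, and the constants and helper names are illustrative): caption constructs are extracted from every input frame into a FIFO, and each output frame is then given exactly the cc_count its target frame rate allows.

    from collections import deque
    from fractions import Fraction

    cc708_fifo = deque()  # 3-byte cc constructs extracted from input frames

    def cc_count_for(framerate):
        # The caption channel has a fixed bit rate, so the number of cc constructs
        # carried per frame depends on the frame rate (roughly 600 / fps:
        # about 20 at 29.97 fps, about 10 at 59.94 fps).
        return int(Fraction(600) / Fraction(framerate))

    def extract(frame_cc_constructs):
        # Called for every input frame: queue its caption constructs so nothing
        # is lost when frames are later dropped or duplicated.
        cc708_fifo.extend(frame_cc_constructs)

    def inject(target_framerate):
        # Called for every output frame: clock out exactly cc_count constructs
        # for the target frame rate, padding if the queue runs dry.
        out = []
        for _ in range(cc_count_for(target_framerate)):
            out.append(cc708_fifo.popleft() if cc708_fifo else b"\xfa\x00\x00")  # padding construct
        return out

    As the commit message notes, the real filter also clocks out the 608 pairs at their own appropriate rate, which this sketch leaves out.
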
  • Encode video of powerpoint presentation for HTML5 playback

    17 April 2013, by user2291446

    We have a number of PowerPoint presentations that have been converted to 16:9
    aspect ratio and then converted into mp4 "master videos" with an "Apple TV" 720p
    profile. These PowerPoint presentations are voice-annotated. So in essence, we
    show a slide and then let the annotation audio play for a while, then go to the
    next slide, and so on. The resulting mp4 master video is somewhere around 900 MB
    on average.

    Here is an example of the master video

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
      Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1mp41
        creation_time   : 1970-01-01 00:00:00
        encoder         : Lavf52.104.0
      Duration: 02:00:57.65, start: 0.000000, bitrate: 970 kb/s
        Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 836 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc
        Metadata:
          creation_time   : 1970-01-01 00:00:00
          handler_name    : VideoHandler
        Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 127 kb/s
        Metadata:
          creation_time   : 1970-01-01 00:00:00
          handler_name    : SoundHandler

    We are trying to get these presentations to play on the web on as many
    devices/browsers as possible, including some that don't do HTML5 (IE7/IE8). We
    have narrowed down our player of choice, which is mediaElement, and have extracted
    some "cue points" from the PowerPoint presentation that mark where the slides
    change. We have also captured thumbnails for those cue points, such that we
    now have a nice list of thumbnails for each slide and an associated cue point in
    the video where the particular slide begins.

    Here comes the problem: due to the large size of the master video, it is not
    practical for us to use it directly with our mediaElement player. We do
    need to transcode the master video to mp4 and ogv in order to get decent
    device/browser coverage.

    We do not seem to be able to find a suitable transcoding strategy to reduce the
    size of the video. We have played with numerous ffmpeg settings and were able to
    reduce the size but when we do so we compromise the ability to jump to specific
    cue points.

    It works well for browsers that support HTML5 video natively (Chrome and Firefox), but
    not for the Flash fallback of mediaElement (IE7/IE8), which uses the mp4 file and
    seemingly depends heavily on the number and frequency of keyframes in the video
    in order to allow clean seeking and skipping using the cue points.

    Seeing that we are talking about a video that contains only slides (practically 90
    static images per presentation) and some sound, we imagine it must be possible to
    transcode it such that the keyframes fall at or near the cue points, and that the
    size of the video could be drastically reduced while still allowing for smooth
    seeking and skipping.
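
    One way to get there (a sketch, not taken from the question; the cue point times, filenames and quality settings below are made up) is ffmpeg's -force_key_frames option, which forces a keyframe at each listed timestamp, combined with a low frame rate and a long GOP, since the picture is static between slides:

    import subprocess

    # Hypothetical cue points (in seconds) extracted from the presentation.
    cue_points = [0, 42.5, 95.0, 151.2]

    cmd = [
        "ffmpeg", "-i", "input.mp4",
        # Force a keyframe exactly at each slide change so the player can
        # seek/skip cleanly to the cue points.
        "-force_key_frames", ",".join(str(t) for t in cue_points),
        # Slides are static, so a low frame rate and a long GOP save a lot of bits.
        "-r", "10", "-g", "600",
        "-c:v", "libx264", "-preset", "slow", "-crf", "28",
        "-c:a", "aac", "-b:a", "96k",
        "-movflags", "+faststart",  # moov atom up front for progressive playback
        "output.mp4",
    ]
    subprocess.run(cmd, check=True)

    An analogous run with -c:v libtheora and -c:a libvorbis would produce the ogv fallback, though whether that encoder honours -force_key_frames should be verified.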