
Media (91)

Other articles (97)

  • Request to create a channel

    12 March 2010

    Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel: the first at the time of registration, the second after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user fills in a series of form fields, which first of all give the administrators information about (...)

  • Accepted formats

    28 January 2010

    The following commands report the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    To begin with, we (...)
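    If you want to script this check rather than read the output by hand, here is a small sketch that parses entries of `ffmpeg -codecs`. The flag-column layout assumed here (first character D = decoding supported, second E = encoding supported, third the codec type letter V/A/S) matches what recent ffmpeg builds print, but treat it as an assumption to verify against your local installation:

    ```python
    import subprocess

    def parse_codec_line(line):
        """Parse one entry of `ffmpeg -codecs` output,
        e.g. ' DEV.LS h264  H.264 / AVC / ...'.
        Assumed flag layout: [0] D=decoding, [1] E=encoding, [2] type (V/A/S)."""
        flags, name, description = line.strip().split(None, 2)
        return {
            "name": name,
            "decode": flags[0] == "D",
            "encode": flags[1] == "E",
            "type": flags[2],
            "description": description,
        }

    def supported_codecs():
        """Run the local ffmpeg (must be on PATH) and yield parsed entries,
        skipping the legend that precedes the '-------' separator line."""
        out = subprocess.run(["ffmpeg", "-codecs"],
                             capture_output=True, text=True).stdout
        past_header = False
        for line in out.splitlines():
            if past_header and len(line.split(None, 2)) == 3:
                yield parse_codec_line(line)
            elif line.strip().startswith("---"):
                past_header = True
    ```

    Filtering the generator for `entry["type"] == "V" and entry["decode"]` then gives the accepted input video formats on that machine.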

  • Automated installation script of MediaSPIP

    25 avril 2011, par

    To overcome the difficulties, mainly due to the installation of server-side software dependencies, an "all-in-one" installation script written in bash was created to simplify this step on a server running a compatible Linux distribution.
    You must have SSH access to your server and a root account to use it, since it installs the dependencies. Contact your provider if you do not have them.
    Documentation on using this installation script is available here.
    The code of this (...)

On other sites (6791)

  • Reading subtitle metadata from mpeg files using ffprobe

    26 September 2022, by Kaydee Dunlop

    I'm using ffmpeg, or to be more specific ffprobe, which is part of the ffmpeg tool stack, to read subtitle information from an MPEG file. Anyway, I'm facing an issue I don't fully understand. If I use the following command:

    ffprobe -of json -show_streams -show_format

    I get back something like this:

        {
        "index": 6,
        "codec_name": "mov_text",
        "codec_long_name": "MOV text",
        "codec_type": "subtitle",
        "codec_tag_string": "tx3g",
        "codec_tag": "0x67337874",
        "width": 3840,
        "height": 240,
        "id": "0x6",
        "r_frame_rate": "0/0",
        "avg_frame_rate": "0/0",
        "time_base": "1/1000",
        "start_pts": 0,
        "start_time": "0.000000",
        "duration_ts": 6706616,
        "duration": "6706.616000",
        "bit_rate": "95",
        "nb_frames": "4028",
        "extradata_size": 48,
        "disposition": {
            "default": 0,
            "dub": 0,
            "original": 0,
            "comment": 0,
            "lyrics": 0,
            "karaoke": 0,
            "forced": 0,
            "hearing_impaired": 0,
            "visual_impaired": 0,
            "clean_effects": 0,
            "attached_pic": 0,
            "timed_thumbnails": 0,
            "captions": 0,
            "descriptions": 0,
            "metadata": 0,
            "dependent": 0,
            "still_image": 0
        },
        "tags": {
            "creation_time": "2022-09-11T01:02:33.000000Z",
            "language": "eng"
        }
    },
    As you can see, there are several flags that can be set in the disposition section; I'm especially interested in "forced" and "hearing_impaired". To set these flags I'm using a tool called "Subler", which writes metadata into mpeg files and their tracks. But for some reason ffprobe does not seem to pick up the fields Subler sets... So I'm kind of stuck, since I can never tell whether a subtitle track is forced, hearing_impaired (SDH), etc. Is there any workaround for this problem, maybe an extra option I have to pass to ffprobe? Or is there an alternative tool that can do it?
    If you want to dig into this issue, I also uploaded a test scenario with forced subtitles, normal subtitles and SDH subtitles properly set; it also contains screenshots and the raw SRT files, which are not strictly needed since the subs are already embedded in the mp4 file, but I attached them just in case.
    https://drive.google.com/file/d/1ZZ32i17A33Lhpn4a5BDg033yV9PbZhtS/view?usp=sharing
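    When scripting this kind of check, the JSON that ffprobe prints can at least be reduced to the disposition flags per subtitle stream. A minimal sketch, assuming the `-of json -show_streams` output shown above and that Python is acceptable:

    ```python
    import json
    import subprocess

    def subtitle_flags(probe_output):
        """Map each subtitle stream's index to the disposition flags that are set."""
        flags = {}
        for stream in probe_output.get("streams", []):
            if stream.get("codec_type") != "subtitle":
                continue
            disposition = stream.get("disposition", {})
            flags[stream["index"]] = sorted(k for k, v in disposition.items() if v == 1)
        return flags

    def probe(path):
        """Run ffprobe (must be on PATH) and parse its JSON output."""
        out = subprocess.run(
            ["ffprobe", "-v", "quiet", "-of", "json", "-show_streams", path],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    # With the stream shown above (abridged), no disposition flag is set:
    sample = {"streams": [{"index": 6, "codec_type": "subtitle",
                           "disposition": {"default": 0, "forced": 0,
                                           "hearing_impaired": 0}}]}
    print(subtitle_flags(sample))   # {6: []}
    ```

    If Subler's flags still come back as all zeros, the tags may simply not be written in a form the mov_text demuxer exposes; a second opinion from another inspector such as GPAC's `MP4Box -info file.mp4` can help isolate which side is at fault.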
  • avcodec/nvenc: surface allocation reduction

    25 April 2017, by Ben Chang

    This patch aims to reduce the number of input/output surfaces
    NVENC allocates per session. The previous default allocated 32 surfaces
    (unless a user-specified parameter or lookahead was involved). Having a
    large number of surfaces consumes extra video memory (especially for
    higher-resolution encoding). The patch changes the surface calculation
    for the default, B-frames and lookahead scenarios respectively.

    The other change involves surface selection. Previously, if a session
    allocated x surfaces, only x-1 surfaces were used (due to the combination
    of output delay and lock-toggle logic). To avoid leaving surfaces unused,
    surface rotation now uses a predefined FIFO.

    Signed-off-by: Timo Rothenpieler <timo@rothenpieler.org>

    • [DH] libavcodec/nvenc.c
    • [DH] libavcodec/nvenc.h
    • [DH] libavcodec/nvenc_h264.c
    • [DH] libavcodec/nvenc_hevc.c
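    The surface-selection change described above amounts to handing surfaces out in strict rotation so that none sits permanently idle. As an illustrative toy model only (plain Python, not the actual nvenc.c code):

    ```python
    from collections import deque

    class SurfacePool:
        """Toy FIFO surface pool: surfaces are handed out and reclaimed in strict
        rotation, so all n allocated surfaces get used. By contrast, a scheme that
        pairs output delay with a per-surface lock toggle can leave one of the n
        surfaces permanently idle."""

        def __init__(self, n):
            self.free = deque(range(n))   # indices of available surfaces
            self.in_flight = deque()      # surfaces currently held by the encoder

        def acquire(self):
            if not self.free:
                raise RuntimeError("surface pool exhausted")
            surface = self.free.popleft()
            self.in_flight.append(surface)
            return surface

        def release_oldest(self):
            # Output arrives in submission order, so the oldest surface frees first.
            self.free.append(self.in_flight.popleft())

    pool = SurfacePool(3)
    print([pool.acquire() for _ in range(3)])   # [0, 1, 2] -- every surface in use
    ```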
  • FFMPEG: While decoding video, is it possible to write the result into a user-provided buffer?

    6 February 2015, by cbel

    In an ffmpeg video-decoding scenario (H264, for example), we typically allocate an AVFrame, decode the compressed data, and then read the result from the AVFrame's data and linesize members, as in the following code:

    // input setting: data and size describe an H264 packet.
    AVPacket avpkt;
    av_init_packet(&avpkt);
    avpkt.data = const_cast<uint8_t *>(data);
    avpkt.size = size;

    // decode video: H264 ---> YUV420
    AVFrame *picture = avcodec_alloc_frame();
    int len = avcodec_decode_video2(context, picture, &got_picture, &avpkt);

    We may then use the result for other tasks, for example rendering with DirectX9: that is, prepare buffers (DirectX9 textures) and copy the decoding result into them.

    D3DLOCKED_RECT lrY;
    D3DLOCKED_RECT lrU;
    D3DLOCKED_RECT lrV;
    textureY->LockRect(0, &lrY, NULL, 0);
    textureU->LockRect(0, &lrU, NULL, 0);
    textureV->LockRect(0, &lrV, NULL, 0);

    // copy YUV420: picture->data ---> lr.pBits.
    my_copy_image_function(picture->data[0], picture->linesize[0], lrY.pBits, lrY.Pitch, width, height);
    my_copy_image_function(picture->data[1], picture->linesize[1], lrU.pBits, lrU.Pitch, width / 2, height / 2);
    my_copy_image_function(picture->data[2], picture->linesize[2], lrV.pBits, lrV.Pitch, width / 2, height / 2);

    This process involves 2 copies (ffmpeg copies the result into picture->data, and then we copy picture->data into the DirectX9 textures).

    My question is: is it possible to reduce this to a single copy? In other words, can we hand our buffers (pBits, the memory of the DirectX9 textures) to ffmpeg, so that the decode function writes its result directly into the DirectX9 texture memory rather than into the AVFrame's own buffers?
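    For reference, libavcodec does expose a hook for this: the AVCodecContext.get_buffer2 callback lets the application supply the frame's data planes, so the decoder writes into caller-owned memory (subject to alignment constraints and the codec's reference-frame requirements). Stripped of the ffmpeg specifics, the pattern is just "decode into a buffer the caller hands in"; a toy Python sketch of that shape, not the real API:

    ```python
    def decode_into(packet: bytes, out: memoryview) -> int:
        """Toy 'decoder' that writes its output directly into the caller's buffer,
        mimicking a get_buffer2-style allocation hook: the decoder never owns the
        destination memory, so no second copy is needed."""
        n = min(len(packet), len(out))
        out[:n] = packet[:n]          # stands in for the real YUV plane write
        return n

    # The caller's buffer plays the role of a locked D3D9 texture surface.
    texture = bytearray(16)
    written = decode_into(b"YUV data", memoryview(texture))
    print(written, bytes(texture[:written]))   # 8 b'YUV data'
    ```

    The practical caveat with the real callback is that many codecs keep decoded frames alive as references, so a caller-provided buffer cannot be reused or unlocked until the decoder releases the frame.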