
Media (1)

Keyword: - Tags - / Christian Nold

Other articles (75)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types online.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a "media" article;

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP sites while installing their functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    • implementation costs to be shared between several different projects / individuals
    • rapid deployment of multiple unique sites
    • creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (4425)

  • Is it possible to re-translate RTMP stream without losing speed? [closed]

    3 August 2024, by Lunavod

    I've been working on a stream proxy: instead of streaming directly to Twitch, OBS streams to a local RTMP server running on the same machine. The server uses ffmpeg to decode the FLV in the RTMP stream to rawvideo, modifies the pixels, then re-encodes to FLV and streams the result to Twitch, again with ffmpeg.
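
    Roughly, the decode and encode legs look like this on the command line (a sketch only: the pixel format, resolution, frame rate, encoder settings and the Twitch ingest URL/stream key below are placeholders, and audio handling is left out):

    $ ffmpeg -f flv -i pipe:0 -f rawvideo -pix_fmt bgr24 pipe:1
    $ ffmpeg -f rawvideo -pix_fmt bgr24 -s 1920x1080 -r 60 -i pipe:0 -c:v libx264 -preset veryfast -b:v 6000k -f flv rtmp://live.twitch.tv/app/STREAM_KEY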

    


    However, I was not able to make this setup work reliably: I always run into buffering issues on Twitch. Even when ffmpeg reports a stable bitrate and 60 fps, Twitch slowly drains its buffer, pauses to rebuffer, then slowly drains it again... This results in an endlessly growing delay and frequent pauses.

    


    I simplified this setup by removing the rawvideo stage and the frame modification. The simplified setup accepts the RTMP stream and hands it straight to FFmpeg, which forwards it to Twitch with (I hope) minimal overhead.
But even with this setup, Twitch latency still grows, although considerably more slowly.
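
    In ffmpeg terms the simplified relay is essentially just a remux (again a sketch; the ingest URL and stream key are placeholders):

    $ ffmpeg -f flv -i pipe:0 -c copy -f flv rtmp://live.twitch.tv/app/STREAM_KEY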

    


    The connection between the RTMP server and ffmpeg is made over a TCP socket.
I tried using stdin instead, but that works even worse.
I also tried Windows named pipes but ran into a bottleneck: writing rawvideo from ffmpeg and reading it from the script worked fine, as did writing from the script and reading from ffmpeg, but running both at the same time over two separate pipes slowed everything down.
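
    For the plumbing itself, ffmpeg can listen on a plain TCP socket, so the server side only has to connect and write the FLV bytes to it (a sketch; the address, port and output URL are placeholders):

    $ ffmpeg -f flv -i "tcp://127.0.0.1:9000?listen" -c copy -f flv rtmp://live.twitch.tv/app/STREAM_KEY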

    


    Initially all of this was written in Python, but I also tried Go, hoping that the problem was my Python RTMP server implementation.

    


    Am I missing something fundamental here? Is this idea possible at all?

    


  • How can I dynamically update metadata in audio output with libav so that updates appear in MPV?

    9 March, by Teddy

    My ultimate goal is to proxy an internet radio station and programmatically add metadata to it during the stream that can be displayed and updated in MPV, the media player playing the audio.

    


    The metadata I would like to add is primarily the song title, but ideally additional information including artist, composer, and album.

    


    I envision running the proxy program like this:

    


    $ curl https://example.com/stream.mp3 | ./proxy_add_metadata | mpv -


    


    or maybe this:

    


    $ ./proxy_add_metadata &
    #=> <output>
    $ mpv <output>


    How could I update the song metadata dynamically over time using libav?


    I’m using FFmpeg version 7.1.


    I first tried changing the metadata dictionary shortly before writing a frame, with a few different container formats:


    av_dict_set(&output_ctx->metadata, "title", "Title 1", 0);
    /* [...] */
    av_interleaved_write_frame(output_ctx, packet);


    Setting metadata with av_dict_set(&output_ctx->metadata, "title", "Title 1", 0); only appears to work when done before writing the output header.
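
    For comparison, the one variant that does behave as expected is setting the dictionary once before the header is written (a minimal sketch of that baseline; "Initial Title" is just a placeholder value):

    /* Global metadata set before avformat_write_header() ends up in the
       container header; av_dict_set() calls made after this point have no
       visible effect on output that has already been written. */
    av_dict_set(&output_ctx->metadata, "title", "Initial Title", 0);
    avformat_write_header(output_ctx, NULL);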


    My next idea was to try setting metadata in AVPacket side data, but I’m unclear which container formats support this for the kind of metadata I’m working with.


    I’m open to any output media container format or FFmpeg-originated network stream.


    It’s unclear to me whether the metadata should be written to a separate stream within the media container or whether it should be written as side data in the output packets.


    If what I’m trying to do is impossible, please explain why.


    What I have so far reads audio from standard input and writes it to standard output. The input audio can be assumed to be in MP3 format. Non-working sections for metadata updates are commented out.


    /* proxy_add_metadata.c */
    #include <stdio.h>

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    int main() {
        int _err;

        /* MP3 input */
        AVFormatContext *input_ctx = avformat_alloc_context();
        _err = avformat_open_input(&input_ctx, "pipe:0", NULL, NULL);
        _err = avformat_find_stream_info(input_ctx, NULL);

        AVFormatContext *output_ctx;
        _err = avformat_alloc_output_context2(&output_ctx, NULL, "matroska", "pipe:1");

        AVStream *input_stream = input_ctx->streams[0];
        AVStream *output_stream = avformat_new_stream(output_ctx, NULL);

        _err = avcodec_parameters_copy(output_stream->codecpar, input_stream->codecpar);
        _err = avio_open(&output_ctx->pb, "pipe:1", AVIO_FLAG_WRITE);

        _err = avformat_write_header(output_ctx, NULL);

        AVPacket *packet = av_packet_alloc();

        /* Set up packet side data. */
        /*
        AVDictionary *packet_side_data_dict = NULL;
        av_dict_set(&packet_side_data_dict, "title", "Title 1", 0);

        size_t packet_side_data_size = 0;
        uint8_t *packet_side_data = av_packet_pack_dictionary(
            packet_side_data_dict,
            &packet_side_data_size
        );
        av_dict_free(&packet_side_data_dict);
        */

        while (1) {
            _err = av_read_frame(input_ctx, packet);
            if (_err < 0) {
                break;
            }

            /* Can metadata updates be made here? */

            /* Option 1: Attempt to write metadata to the container. */
            /*
            _err = av_dict_set(&output_ctx->metadata, "title", "Title 1", 0);
            if (_err < 0) {
                fprintf(stderr, "error: can't set metadata title in stream: %s\n", av_err2str(_err));
                break;
            }
            */

            /* Option 2: Attempt to write metadata to packet side data. */
            /*
            _err = av_packet_add_side_data(
                packet,
                AV_PKT_DATA_METADATA_UPDATE,
                packet_side_data,
                packet_side_data_size
            );
            if (_err < 0) {
                fprintf(stderr, "error: can't add side data to packet: %s\n", av_err2str(_err));
                break;
            }
            */

            AVStream *input_stream = input_ctx->streams[packet->stream_index];
            AVStream *output_stream = output_ctx->streams[packet->stream_index];

            av_packet_rescale_ts(packet, input_stream->time_base, output_stream->time_base);
            packet->pos = -1;

            _err = av_interleaved_write_frame(output_ctx, packet);
            if (_err < 0) {
                fprintf(stderr, "error: packet write: %s\n", av_err2str(_err));
                break;
            }
        }

        av_write_trailer(output_ctx);

        av_packet_free_side_data(packet);
        av_packet_free(&packet);

        avio_closep(&output_ctx->pb);
        avformat_free_context(output_ctx);

        avformat_close_input(&input_ctx);

        return 0;
    }


    cc \
        -Wall \
        -g \
        -I/.../ffmpeg7/include \
        -o proxy_add_metadata \
        proxy_add_metadata.c \
        -L/.../ffmpeg7/lib -lavformat -lavcodec


    $ < sample.mp3 ./proxy_add_metadata | mpv -


  • Introducing WebM, an open web media project

    20 May 2010, by noreply@blogger.com (christosap)

    A key factor in the web’s success is that its core technologies such as HTML, HTTP, TCP/IP, etc. are open and freely implementable. Though video is also now core to the web experience, there is unfortunately no open and free video format that is on par with the leading commercial choices. To that end, we are excited to introduce WebM, a broadly-backed community effort to develop a world-class media format for the open web.

    WebM includes:

    • VP8, a high-quality video codec we are releasing today under a BSD-style, royalty-free license
    • Vorbis, an already open source and broadly implemented audio codec
    • a container format based on a subset of the Matroska media container

    The team that created VP8 have been pioneers in video codec development for over a decade. VP8 delivers high quality video while efficiently adapting to the varying processing and bandwidth conditions found on today’s broad range of web-connected devices. VP8’s efficient bandwidth usage will mean lower serving costs for content publishers and high quality video for end-users. The codec’s relative simplicity makes it easy to integrate into existing environments and requires less manual tuning to produce high quality results. These existing attributes and the rapid innovation we expect through the open-development process make VP8 well suited for the unique requirements of video on the web.

    A developer preview of WebM and VP8, including source code, specs, and encoding tools is available today at www.webmproject.org.

    We want to thank the many industry leaders and web community members who are collaborating on the development of WebM and integrating it into their products. Check out what Mozilla, Opera, Google Chrome, Adobe, and many others below have to say about the importance of WebM to the future of web video.


    Telestream