
Media (91)

Other articles (55)

  • Multilang: improve the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, a preconfiguration is automatically set up by MediaSPIP init so that the new feature is immediately operational. There is therefore no mandatory configuration step for this.

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, on top of those used by the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a shared instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • Selection of projects using MediaSPIP

    2 May 2011

    The examples below are representative of how MediaSPIP is used for specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen such associations. Its members (...)

On other sites (3633)

  • avcodec/refstruct: Add simple API for refcounted objects

    4 August 2022, by Andreas Rheinhardt
    avcodec/refstruct: Add simple API for refcounted objects
    

    For now, this API is supposed to replace all the internal uses
    of reference counted objects in libavcodec; "internal" here
    means that the object is created in libavcodec and is never
    put directly in the hands of anyone outside of it.

    It is intended to be made public eventually, but for now
    I enjoy the ability to modify it freely.

    Several shortcomings of the AVBuffer API motivated this API:
    a) The unnecessary allocations (and ensuing error checks)
    when using the API. Besides the need for runtime checks it
    imposes upon the developer the burden of thinking through
    what happens in case an error happens. Furthermore, these
    error paths are typically not covered by FATE.
    b) The AVBuffer API is designed with buffers and not with
    objects in mind: The type for the actual buffers used
    is uint8_t*; it pretends to be able to make buffers
    writable, but this is wrong in case the buffer is not a POD.
    Another instance of this thinking is the lack of a reset
    callback in the AVBufferPool API.
    c) The AVBuffer API incurs unnecessary indirections by
    going through the AVBufferRef.data pointer. In case the user
    tries to avoid this indirection and stores a pointer to
    AVBuffer.data separately (which also allows to use the correct
    type), the user has to keep these two pointers in sync
    in case they can change (and in any case has two pointers
    occupying space in the containing context). See the following
    commit using this API for H.264 parameter sets for an example
    of the removal of such syncing code as well as the casts
    involved in the parts where only the AVBufferRef* pointer
    was stored.
    d) Given that the AVBuffer API allows custom allocators,
    creating refcounted objects with dedicated free functions
    often involves a lot of boilerplate like this:
    obj = av_mallocz(sizeof(*obj));
    ref = av_buffer_create((uint8_t*)obj, sizeof(*obj), free_func, opaque, 0);
    if (!ref) {
        av_free(obj);
        return AVERROR(ENOMEM);
    }

    (There is also a corresponding av_free() at the end of free_func().)
    This is now just
    obj = ff_refstruct_alloc_ext(sizeof(*obj), 0, opaque, free_func);
    if (!obj)
        return AVERROR(ENOMEM);
    See the subsequent patch for the framepool (i.e. get_buffer.c)
    for an example.

    This API does things differently; it is designed to be lightweight*
    as well as geared to the common case where the allocator of the
    underlying object does not matter as long as it is big enough and
    suitably aligned. This allows to allocate the user data together
    with the API's bookkeeping data which avoids an allocation as well
    as the need for separate pointers to the user data and the API's
    bookkeeping data. This entails that the actual allocation of the
    object is performed by RefStruct, not the user. This is responsible
    for avoiding the boilerplate code mentioned in d).

    As a downside, custom allocators are not supported, but it will
    become apparent in subsequent commits that there are enough
    use cases to make it worthwhile.

    Another advantage of this API is that one only needs to include
    the relevant header if one uses the API and not when one includes
    the header of some other component that uses it. This is because there
    is no RefStruct type analog of AVBufferRef. This brings with it
    one further downside: It is not apparent from the pointer itself
    whether the underlying object is managed by the RefStruct API
    or whether this pointer is a reference to it (or merely a pointer
    to it).

    Finally, this API supports const-qualified opaque pointees;
    this will allow to avoid casting const away by the CBS code.

    *: Basically the only exception to the you-only-pay-for-what-you-use
    rule is that it always uses atomics for the refcount.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavcodec/Makefile
    • [DH] libavcodec/refstruct.c
    • [DH] libavcodec/refstruct.h
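
    For orientation, here is a minimal sketch of what allocating and sharing such a refcounted object could look like with this kind of API. The helper names ff_refstruct_allocz(), ff_refstruct_ref() and ff_refstruct_unref() and their exact signatures are assumptions based on the description above (only ff_refstruct_alloc_ext() appears in the commit text itself), so treat this as an illustrative C sketch rather than the definitive API.

        #include "libavcodec/refstruct.h"  /* header added by this commit */
        #include "libavutil/error.h"       /* for AVERROR */

        typedef struct MyParams {
            int width, height;
        } MyParams;

        static int example(void)
        {
            /* RefStruct performs the allocation itself: no AVBufferRef,
             * no (uint8_t*) casts and no second pointer to keep in sync. */
            MyParams *p = ff_refstruct_allocz(sizeof(*p)); /* name assumed: zeroed object, refcount 1 */
            if (!p)
                return AVERROR(ENOMEM);

            /* Acquiring another reference yields the same typed pointer. */
            MyParams *ref = ff_refstruct_ref(p);
            ref->width  = 1920;
            ref->height = 1080;

            ff_refstruct_unref(&ref); /* drops one reference and sets ref to NULL */
            ff_refstruct_unref(&p);   /* last reference gone: the object is freed */
            return 0;
        }
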
  • FFmpeg trim filter fails to use sexagesimal time specification and the output stream is empty. Is it a bug and is there a fix?

    29 November 2020, by Link-akro

    With ffmpeg, the trim filter (or its audio variant atrim) malfunctions whenever I try to write a sexagesimal time specification for the boundary or duration parameters of the filter: the result is always an empty stream.

    According to the documentation I already linked, which was generated on November 26, 2020 at the time of writing, the time duration specification should be supported:

        start, end, and duration are expressed as time duration specifications

    Quote of the spec:

        [-][HH:]MM:SS[.m...]

        [-]S+[.m...][s|ms|us]

    I am working on 64-bit Windows 10 and scripting in its CMD.exe command line, should it matter.

    Here is an instance of trimming a video stream out of a media file, with hardcoded values for simplicity. The audio is retained unless I use atrim as well. We may or may not append the setpts filter recommended in the documentation, setpts=PTS-STARTPTS, if we want to shift the stream to start at the beginning of the cut range.

        ffmpeg -i sample-counter.mp4 -vf "trim=start='1:2':end='1:5'" sample-counter-trimmed.mp4

    If I use decimal seconds it works as intended.

        ffmpeg -i sample-counter.mp4 -vf "trim=start='2':end='5'" sample-counter-trimmed.mp4

    This is the banner of my ffmpeg build. As can be seen, it is the latest build by gyan.dev at the moment I post this.

        ffmpeg version 4.3.1-2020-11-19-full_build-www.gyan.dev Copyright (c) 2000-2020 the FFmpeg developers
        built with gcc 10.2.0 (Rev5, Built by MSYS2 project)
        configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-lzma --enable-libsnappy --enable-zlib --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libdav1d --enable-libzvbi --enable-librav1e --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libaom --enable-libopenjpeg --enable-libvpx --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint

    The incorrect stream is always reported with a size of 0kB in the output of ffmpeg. I confirmed with ffprobe Movie_Countdown-trim-2-5.mov -show_streams -show_entries format=duration that the incorrect stream has zero duration.

    Should I report it as a bug? Is there some correction or workaround?

    I would prefer a solution with the trim filter itself, but if that is not possible, a CMD batch-scripting workaround would do. We should not need a different filter like select, or a seeking option, since tutorials and questions/answers for those exist everywhere already. Scripting would be straightforward in a proper *nix shell and is taught everywhere, so it is not worth covering here; CMD answers, on the other hand, are rare, so a scripting workaround for CMD would have some small worth, since writing one is considerably less straightforward than in a shell with modern arithmetic and parsing abilities built in or packaged.

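    As a side illustration of the duration syntax quoted in the question above, libavutil exposes av_parse_time() for time duration specifications of this form. The following minimal C sketch (independent of the trim issue itself, and assuming a standard FFmpeg development setup with the libavutil headers available) checks that '1:2' and '62' are meant to denote the same duration:

        #include <stdio.h>
        #include <inttypes.h>
        #include "libavutil/parseutils.h"

        int main(void)
        {
            int64_t a = 0, b = 0;
            /* A non-zero third argument selects duration parsing
             * ([-][HH:]MM:SS[.m...] or [-]S+[.m...]); the result is in microseconds. */
            if (av_parse_time(&a, "1:2", 1) < 0 || av_parse_time(&b, "62", 1) < 0)
                return 1;
            printf("%" PRId64 " us vs %" PRId64 " us\n", a, b); /* both should print 62000000 */
            return 0;
        }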

  • C# EmbedIO server: Only the first request is working when trying to live stream from FFMPEG

    7 October 2017, by Connum

    I’m trying to build an HTTP server that will stream dynamic video/audio in the TransportStream format via FFMPEG. I found EmbedIO and it looks like a lightweight yet flexible base for this.

    So, I looked at the module examples and built a very basic module that doesn’t yet handle the request URL at all but responds with the same stream for any request, just to see whether it’s working as intended:

    namespace TSserver
    {
       using Unosquare.Swan;
       using Unosquare.Swan.Formatters;
       using System;
       using System.Collections.Generic;
       using System.IO;
       using System.Threading;
       using System.Threading.Tasks;
    #if NET46
       using System.Net;
    #else
       using Unosquare.Net;
       using Unosquare.Labs.EmbedIO;
       using Unosquare.Labs.EmbedIO.Constants;
       using System.Diagnostics;
    #endif

       /// <summary>
       /// TSserver Module
       /// </summary>
       public class TSserverModule : WebModuleBase
       {
           /// <summary>
           /// Initializes a new instance of the <see cref="TSserverModule"></see> class.
           /// </summary>
           /// The base path.
           /// The json path.
           public TSserverModule()
           {
               AddHandler(ModuleMap.AnyPath, HttpVerbs.Any, HandleRequest);
           }

           /// <summary>
           /// Gets the Module's name
           /// </summary>
           public override string Name => nameof(TSserverModule).Humanize();


           /// <summary>
           /// Handles the request.
           /// </summary>
           /// The context.
           /// The cancellation token.
           /// <returns></returns>
           private Task<bool> HandleRequest(HttpListenerContext context, CancellationToken ct)
           {
               var path = context.RequestPath();
               var verb = context.RequestVerb();

               System.Net.HttpStatusCode statusCode;
               context.Response.SendChunked = true;
               //context.Response.AddHeader("Last-Modified", File.GetLastWriteTime(filename).ToString("r"));
               context.Response.ContentType = "video/mp2t";

               try
               {
                   var ffmpeg = new Process
                   {
                       StartInfo = new ProcessStartInfo
                       {
                           FileName = "ffmpeg.exe",
                           Arguments = "-re -loop 1 -i \"./default.png\" -i \"./jeopardy.mp3\" -c:v libx264 -tune stillimage -r 25 -vcodec mpeg2video -profile:v 4 -bf 2 -b:v 4000k -maxrate:v 5000k -acodec mp2 -ac 2 -ab 128k -ar 48000 -f mpegts -mpegts_original_network_id 1 -mpegts_transport_stream_id 1 -mpegts_service_id 1 -mpegts_pmt_start_pid 4096 -streamid 0:289 -streamid 1:337 -metadata service_provider=\"MYCALL\" -metadata service_name=\"My Station ID\" -y pipe:1",
                           UseShellExecute = false,
                           RedirectStandardOutput = true,
                           CreateNoWindow = true
                       }
                   };

                   ffmpeg.Start();

                   FileStream baseStream = ffmpeg.StandardOutput.BaseStream as FileStream;
                   int lastRead = 0;
                   byte[] buffer = new byte[4096];

                    // Synchronously copy ffmpeg's stdout to the HTTP response until the pipe closes
                    do
                   {
                       lastRead = baseStream.Read(buffer, 0, buffer.Length);
                       context.Response.OutputStream.Write(buffer, 0, lastRead);
                       context.Response.OutputStream.Flush();
                   } while (lastRead > 0);

                   statusCode = System.Net.HttpStatusCode.OK;
               }
               catch (Exception e)
               {
                   statusCode = System.Net.HttpStatusCode.InternalServerError;
               }

               context.Response.StatusCode = (int)statusCode;
               context.Response.OutputStream.Flush();
               context.Response.OutputStream.Close();

               return Task.FromResult(true);
           }

       }
    }

    This does indeed work: when I open a connection in a browser, a TS file is offered for download, and when I connect via VLC Player, I see my default.png file accompanied by the Jeopardy think music - yay! However, if I connect a second client (player or browser), it will just load endlessly and not get anything back. Even if I close the previous connection (abort the download or stop playback), no subsequent connection will result in any response. I have to stop and restart the server in order to be able to make a single connection again.

    It seems to me that my code is blocking the server, despite being run inside a Task of its own. I’m coming from a PHP & JavaScript background, so I’m quite new to C# and threading, and this might be pretty obvious... but I had hoped that EmbedIO would handle all the multitasking/threading stuff.