
Media (91)

Other articles (72)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two following images for a comparison.
    To benefit from it, simply enable the Chosen plugin (Site configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

  • Encoding and conversion into formats readable on the Internet

    10 April 2011

    MediaSPIP converts and re-encodes uploaded documents in order to make them readable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player required by older browsers.
    Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items, namely: a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only a single document can be linked to a so-called "media" article;

On other sites (3568)

  • FFmpeg + iOS + lossy cellular connections

    9 November 2014, by Moss

    I am able to play an RTMP audio + video real-time stream on iOS with FFmpeg. It works fantastically when everything is on a solid WiFi connection.

    When I switch to a cellular connection (great signal strength and LTE/4G), av_read_frame() will intermittently block for an unacceptable amount of time. From what I can tell, it’s not that the cellular data connection just dropped, because I can reconnect immediately and start downloading more packets. In some cases, I’ve clocked 30+ seconds of hang time before it returns the next frame. When the next frame finally comes in, my real-time video stream is permanently delayed by the amount of time that av_read_frame() blocked.

    I attempted a work-around by using the AVIOInterruptCB interrupt callback to abort av_read_frame() if the function takes longer than 1 second to return. Here’s what that code looks like:

    - (void)readPackets {
       // Make sure FFmpeg calls our interrupt periodically
       _context->interrupt_callback.callback = interrupt_cb;
       _context->interrupt_callback.opaque = self;

       dispatch_async(_readPacketQueue, ^(void) {
           int err;
           AVPacket packet;  // filled in by av_read_frame() on each iteration

           while(true) {
               _readFrameTimeStamp = [[NSDate date] timeIntervalSince1970];
               err = av_read_frame(_context, &packet);
               _readFrameTimeStamp = 0;

               if(err) {
                   // Error - Reconnect the entire stream from scratch, taking 5-10 seconds
                   // And we know when av_read_frame() was aborted
                   // because its error code is -1414092869 ("EXIT")
               }
               else {
                   // Play this audio or video packet
               }
           }
      });
    }

    /**
    * Interrupt callback passed to FFmpeg via AVIOInterruptCB
    * @return 1 to abort the current blocking operation, 0 to let it continue
    */
    static int interrupt_cb(void *decoder) {
       if(decoder) {
           if(_readFrameTimeStamp != 0) {
               if([[NSDate date] timeIntervalSince1970] - _readFrameTimeStamp > 1) {
                   // Abort av_read_frame(), it's taking longer than 1 second
                   return 1;
               }
           }
       }
       return 0; // No read in progress or still within the deadline: keep going
    }

    This definitely aborts av_read_frame() after 1 second, but unfortunately after I do this, future attempts to call av_read_frame() result in EIO errors (-5), which indicates that the connection has been severed.

    As a result, I am forced to fully reconnect the viewer, which takes 5-10 seconds (avformat_open_input() takes 3-4 seconds, finding the stream info again takes another 2-3 seconds, and only then can I start reading frames).
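
    For reference, this is roughly what that full reconnect path looks like with the libavformat API. It is only an illustrative sketch, not the poster’s actual code: the helper name is made up and error handling is reduced to the minimum.

    #include <libavformat/avformat.h>

    // Hypothetical reconnect helper: re-open the input, probe the streams,
    // and hand back a context ready for av_read_frame(). The same interrupt
    // callback is reused so the new connection can also be aborted.
    static AVFormatContext *reconnect_stream(const char *url, AVIOInterruptCB cb) {
        AVFormatContext *ctx = avformat_alloc_context();
        if (!ctx)
            return NULL;

        ctx->interrupt_callback = cb;      // keep the abort mechanism in place

        // The 3-4 second step: re-open the network input from scratch
        if (avformat_open_input(&ctx, url, NULL, NULL) < 0)
            return NULL;                   // ctx is freed and set to NULL on failure

        // The additional 2-3 second step: read the stream info again
        if (avformat_find_stream_info(ctx, NULL) < 0) {
            avformat_close_input(&ctx);
            return NULL;
        }

        return ctx;                        // caller resumes av_read_frame() on this context
    }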

    The 5-10 second delay to fully reconnect is much better than waiting more than 10 seconds for av_read_frame() to unblock, and it’s much better than the real-time stream being delayed by a significant amount. But it’s much worse than being able to retry av_read_frame() immediately.

    From a cellular user’s perspective, their video locks up intermittently for 5-10 seconds while we reconnect the stream in the background from scratch, which isn’t a good user experience.

    What strategies are there to better manage av_read_frame() on a lossy cellular connection?
    (Or strategies to improve the reconnect time?)

  • AAC: Fix M/S stereo encoding

    3 March 2015, by Claudio Freire
    AAC: Fix M/S stereo encoding
    

    This patch fixes a pointer arithmetic bug in adjust_frame_information that resulted in heavily corrupted audio when using M/S encoding. Also, a backup copy of untransformed coefficients has to be kept around, or attempts at re-processing the frame (which happens when heavily overspending bits during transients) will result in re-encoding of the coefficients and subsequent corruption of the resulting stream.
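
    As a purely illustrative sketch (this is not the libavcodec code, and all names here are made up), the reasoning behind the backup copy can be pictured like this: the M/S step rewrites the channel buffers in place, so every re-processing pass has to start again from the saved originals, otherwise the transform is applied twice and the data is corrupted.

    #include <string.h>

    #define COEFFS 1024   /* coefficients per channel per frame (illustrative) */

    /* In-place mid/side transform: afterwards, "left" holds M and "right" holds S. */
    static void ms_transform(float *left, float *right) {
        for (int i = 0; i < COEFFS; i++) {
            float mid  = (left[i] + right[i]) * 0.5f;
            float side = (left[i] - right[i]) * 0.5f;
            left[i]  = mid;
            right[i] = side;
        }
    }

    static void encode_frame(float *left, float *right, int max_passes) {
        float left_backup[COEFFS], right_backup[COEFFS];

        /* Keep the untransformed coefficients once, before any processing. */
        memcpy(left_backup,  left,  sizeof(left_backup));
        memcpy(right_backup, right, sizeof(right_backup));

        for (int pass = 0; pass < max_passes; pass++) {
            /* Restore the originals on every pass; without this, a retry would
               apply M/S on top of already M/S-coded data and corrupt the frame. */
            memcpy(left,  left_backup,  sizeof(left_backup));
            memcpy(right, right_backup, sizeof(right_backup));

            ms_transform(left, right);
            /* ... quantize, count bits, and decide whether another pass is needed ... */
        }
    }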

    A/B testing shows the bug as corrected, but it still cannot prove that M/S coding is a win, at least in the numbers. Limited listening tests do show improvement on M/S-encoded samples at lower bitrates, but the gains are hidden among the other artifacts that remain to be corrected in the encoder.

    Some of the regressions flagged in the report do show poor stereo image (but not buggy), so M/S encoding is clearly not good enough yet to be defaulted to auto.

    In numbers, Patched against Unpatched, stereo_mode auto:

    Files: 114
    Bitrates: 6
    Tests: 683

    Serious Regressions: 0 (0%)
    Regressions: 0 (0%)
    Improvements: 227 (33%)
    Big improvements: 92 (13%)
    Worst regression - mybloodrusts.wv - 256k
    - StdDev: 28.61 pSNR: -0.43 maxdiff: 1372.00
    Best improvement - 60.wv - 384k
    - StdDev: -369.57 pSNR: 45.02 maxdiff: -13322.00
    Average - StdDev: -80.56 pSNR: 2.49 maxdiff: -8858.00

    Patched against Unpatched stereo_mode ms_off shows no difference.

    Patched stereo_mode auto vs Unpatched stereo_mode ms_off shows a small average improvement, just not too significant:

    Serious Regressions: 0 (0%)
    Regressions: 10 (1%)
    Improvements: 45 (6%)
    Big improvements: 2 (0%)
    Worst regression - Illinois.wv - 256k
    - StdDev: 33.20 pSNR: -2.03 maxdiff: 477.00
    Best improvement - song_of_circomstances.flac - 384k
    - StdDev: -3.97 pSNR: 7.61 maxdiff: -826.00
    Average - StdDev: -10.25 pSNR: 0.20 maxdiff: -281.00

    Signed-off-by: Michael Niedermayer <michaelni@gmx.at>

    • [DH] libavcodec/aac.h
    • [DH] libavcodec/aaccoder.c
    • [DH] libavcodec/aacenc.c
  • Combining audio and video streams in ffmpeg in nodejs

    10 July 2015, by LouisK

    This is a similar question to Merge WAV audio and WebM video but I’m attempting to deal with two streams instead of static files. It’s kind of a multi-part question.

    It may be as much an ffmpeg question as a Node.js question (or more). I’ve never used ffmpeg before and haven’t done a ton of streaming/piping.

    I’m using Mauz-Khan’s MediaStreamCapture (an expansion on RecordRTC) in conjunction with Socket.io-stream to stream media from the browser to the server. From webkit this delivers independent streams for audio and video which I’d like to combine in a single transcoding pass.

    Looking at FFmpeg’s docs, it looks like it is fully capable of consuming and merging these streams simultaneously.

    Looking at these NPM modules:

    https://www.npmjs.com/package/fluent-ffmpeg and https://www.npmjs.com/package/stream-transcoder

    Fluent-ffmpeg’s docs suggest it can take a stream and a bunch of static files as inputs, while stream-transcoder only takes a single stream.

    I see this as a use case that just wasn’t built in (or needed) by the module developers, but I wanted to see if anyone had used either module (or another one) to accomplish this before I go ahead with forking and trying to add the functionality.

    Looking at the source of stream-transcoder, it’s clearly set up to use only one input, but it may not be that hard to add a second one. From the ffmpeg perspective, is adding a second input as simple as adding an extra source stream and an extra ’-i’ in the command? (I think yes, but I can foresee a lot of time burned trying to figure this out through Node.)

    This section of stream-transcoder is where the work is really being done:

    /* Spawns child and sets up piping */
    Transcoder.prototype._exec = function(a) {

       var self = this;

       if ('string' == typeof this.source) a = [ '-i', this.source ].concat(a);
       else a = [ '-i', '-' ].concat(a);

       var child = spawn(FFMPEG_BIN_PATH, a, {
           cwd: os.tmpdir()
       });
       this._parseMetadata(child);

       child.stdin.on('error', function(err) {
           try {
               if ('object' == typeof self.source) self.source.unpipe(this.stdin);
           } catch (e) {
               // Do nothing
           }
       });

       child.on('exit', function(code) {
           if (!code) self.emit('finish');
           else self.emit('error', new Error('FFmpeg error: ' + self.lastErrorLine));
       });

       if ('object' == typeof this.source) this.source.pipe(child.stdin);

       return child;

    };

    I’m not quite experienced enough with piping and child processes to see off the bat where I’d add the second source. Could I simply do something along the lines of this.source2.pipe(child.stdin)? How would I go about getting the second stream into the FFmpeg child process?