
Other articles (87)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04
    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send the fixes needed to add (...)

  • Specific configuration for PHP5

    4 February 2011

    PHP5 is mandatory; you can install it by following this specific tutorial.
    It is recommended to disable safe_mode at first. However, if safe_mode is correctly configured and the necessary binaries are accessible, MediaSPIP should work correctly with it enabled.
    Specific modules
    Certain specific PHP modules need to be installed, via your distribution's package manager or manually: php5-mysql for connectivity with the (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several additional plugins, beyond those of the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

On other sites (5668)

  • Combining audio and video streams in ffmpeg in nodejs

    10 July 2015, by LouisK

    This is a similar question to Merge WAV audio and WebM video but I’m attempting to deal with two streams instead of static files. It’s kind of a multi-part question.

    It may be as much an ffmpeg question as a Node.js question (or more). I’ve never used ffmpeg before and haven’t done a ton of streaming/piping.

    I’m using Mauz-Khan’s MediaStreamCapture (an expansion on RecordRTC) in conjunction with Socket.io-stream to stream media from the browser to the server. From webkit this delivers independent streams for audio and video which I’d like to combine in a single transcoding pass.

    Looking at FFmpeg’s docs it looks like it’s 100% capable of using and merging these streams simultaneously.

    Looking at these NPM modules:

    https://www.npmjs.com/package/fluent-ffmpeg and https://www.npmjs.com/package/stream-transcoder

    Fluent-ffmpeg’s docs suggest it can take a stream and a bunch of static files as inputs, while stream-transcoder only takes a single stream.

    I see this as a use case that just wasn't built in (or needed) by the module developers, but I wanted to see if anyone had used either (or another module) to accomplish this before I get on with forking and trying to add the functionality.

    Looking at the source of stream-transcoder, it's clearly set up to use only one input, but it may not be that hard to add a second. From the ffmpeg perspective, is adding a second input as simple as adding an extra source stream and an extra '-i' in the command? (I think yes, but I can foresee a lot of time burned trying to figure this out through Node.)

    This section of stream-transcoder is where the work is really being done:

    /* Spawns child and sets up piping */
    Transcoder.prototype._exec = function(a) {

       var self = this;

       if ('string' == typeof this.source) a = [ '-i', this.source ].concat(a);
       else a = [ '-i', '-' ].concat(a);

       var child = spawn(FFMPEG_BIN_PATH, a, {
           cwd: os.tmpdir()
       });
       this._parseMetadata(child);

       child.stdin.on('error', function(err) {
           try {
               // inside this handler 'this' is child.stdin itself, so
               // the pipe has to be undone via the captured child reference
               if ('object' == typeof self.source) self.source.unpipe(child.stdin);
           } catch (e) {
               // Do nothing
           }
       });

       child.on('exit', function(code) {
           if (!code) self.emit('finish');
           else self.emit('error', new Error('FFmpeg error: ' + self.lastErrorLine));
       });

       if ('object' == typeof this.source) this.source.pipe(child.stdin);

       return child;

    };

    I'm not quite experienced enough with piping and child processes to see off the bat where I'd add the second source. Could I simply do something along the lines of this.source2.pipe(child.stdin)? How would I go about getting the second stream into the FFmpeg child process?

  • Using ffmpeg libraries to decode wav audio as PCM samples

    25 September 2016, by Lorenzo Monni

    I’m using the ffmpeg libraries to process audio files.

    I need to decode .wav audio files so I can operate on their samples in an understandable format, i.e. decimal numbers in the range [-1, 1], as a normal audio waveform.

    I have written the code for decoding and it is apparently working well, but when I look at the decoded samples it seems something went wrong in the sample-number translation. I paste here only the part of the code where I translate the samples from the audio frame into 16-bit PCM:

    while(av_read_frame(pFormatCtx, &apkt)>=0) {
       if(apkt.stream_index==audioStream->index)
       {

           // Try to decode the packet into a frame
           int frameFinished = 0;
           avcodec_decode_audio4(aCodecCtx, aFrame, &frameFinished, &apkt);
           int data_size = av_samples_get_buffer_size(&plane_size, aCodecCtx->channels,
                                                           aFrame->nb_samples,
                                                           aCodecCtx->sample_fmt, 1);
           // Some frames rely on multiple packets, so we have to make sure the frame is finished before
           // we can use it
           if (frameFinished)
           {
               for(int a=0;a < plane_size/sizeof(int16_t);a++)
               {
                   fprintf(fd,"%d\n",(int16_t*)aFrame->data[a]);

               }
           }
       }

       // Free the packet that was allocated by av_read_frame
       av_free_packet(&apkt);

    }

    Additional information and issues:

    • The sample_fmt in my allocated AVCodecContext is "AV_SAMPLE_FMT_S16", so the sample values should be 16-bit signed integers, i.e., in decimal, numbers between -32768 and 32767 (I don't remember how the disparity between the positive and negative counts is resolved). However, when I decode them as int16_t I see much larger numbers that seem to fall in the 32-bit signed range (even though the file is 16-bit). E.g., the maximum of my decoded audio (after the int16_t translation) is 2044951012;

    • My .wav file has two channels, but I can't access both. If I use extended_data of the audio frame struct to point to the second channel (index 1), the execution ends in a segmentation fault. The same happens with the data pointer. I'm only able to recover one channel, from data[0].

    Here is how my audio file, decoded with the aforementioned code and saved to a txt file, looks:

    [image: plot of the decoded samples]

    Here is how the trend of the signal should look:

    [image: plot of the expected waveform]

    If I play my decoded signal, the sound shows some similarities with the original audio file, but with a lot of destructive artifacts in it.

    Final remarks: the ffmpeg documentation and past Stack Overflow questions have not been much help in solving this problem.

  • Convert a YUVJ422 and YUVJ420 frame into YV12 in C++ with FFmpeg

    22 January 2024, by Can

    I am currently writing a program to convert YUVJ422 and YUVJ420 frames into YV12. The YV12 frame is then uploaded to the GPU and converted back into RGB there, but that part should work fine (more on that later).

    


    1080p and 720p already work very well and performantly (no delays or anything) with my program, but 540p has a weird artifact at the bottom of the frame (8 rows of pixels are greenish, but also kind of transparent). So copying the brightness information worked, but the U and/or V plane seem to be missing something at the end.

    


    My thought is that maybe, because 540 is not evenly divisible by 8, the copy operation misses something? It could also be some padding that is not taken into account? So I tried to hard-code the height to 544 before decoding, providing a height of 544 to the FFmpeg decoder, but that didn't work either and resulted in a similar output.

    


    Another reason for the green line could be that the shader does not take any padding into account; the height provided to the shader is 540. But I am not quite sure the shader is the problem, as it works for the other formats, and a green line tends to indicate that not enough data was copied: green lines usually mean zeroed memory, as a zero would translate to green in YUV.

    


    I am now out of ideas as to why the code fails for 540p formats, so I hope that someone has run into this issue before and can provide some clarification. Here is my code to convert the pixel data. Keep in mind that the code is not fully optimized yet; I have already planned to write shaders to convert directly from the YUVJ420 and YUVJ422 formats into RGBA, as that would be much faster, but for now I have to take this "workaround" and convert the data to YV12 first, for other reasons.

    


            if (mCurrentFrame->format == AV_PIX_FMT_YUVJ420P)
        {
            if (540 == mCurrentFrame->height)
            {
                int uvHeight = mCurrentFrame->height / 2;
                int offset   = 0;

                // Copy Y plane
                for (int y = 0; y < mCurrentFrame->height; ++y)
                {
                    memcpy(decodedFrame->GetData() + offset, mCurrentFrame->data[0] + mCurrentFrame->linesize[0] * y, mCurrentFrame->width);
                    offset += mCurrentFrame->width;
                }

                // Copy V plane
                for (int v = 0; v < uvHeight; ++v)
                {
                    memcpy(decodedFrame->GetData() + offset, mCurrentFrame->data[2] + mCurrentFrame->linesize[2] * v, mCurrentFrame->width / 2);
                    offset += mCurrentFrame->width / 2;
                }

                // Copy U plane
                for (int u = 0; u < uvHeight; ++u)
                {
                    memcpy(decodedFrame->GetData() + offset, mCurrentFrame->data[1] + mCurrentFrame->linesize[1] * u, mCurrentFrame->width / 2);
                    offset += mCurrentFrame->width / 2;
                }
            }
            else
            {
                int ySize  = mCurrentFrame->width * mCurrentFrame->height;
                int uvSize = (mCurrentFrame->width / 2) * (mCurrentFrame->height / 2);

                // Copy Y plane
                memcpy(decodedFrame->GetData(), mCurrentFrame->data[0], ySize);

                // Copy V plane
                memcpy(decodedFrame->GetData() + ySize, mCurrentFrame->data[2], uvSize);

                // Copy U plane
                memcpy(decodedFrame->GetData() + ySize + uvSize, mCurrentFrame->data[1], uvSize);
            }
        }
        else if (mCurrentFrame->format == AV_PIX_FMT_YUVJ422P)
        {
            int offset = 0;

            if (540 == mCurrentFrame->height)
            {
                // Copy Y plane, but linewise
                for (int y = 0; y < mCurrentFrame->height; ++y)
                {
                    memcpy(decodedFrame->GetData() + offset, mCurrentFrame->data[0] + mCurrentFrame->linesize[0] * y, mCurrentFrame->width);
                    offset += mCurrentFrame->width;
                }
            }
            else
            {
                int ySize = mCurrentFrame->width * mCurrentFrame->height;
                offset    = ySize;

                // Copy Y plane
                memcpy(decodedFrame->GetData(), mCurrentFrame->data[0], ySize);
            }

            // Copy V plane, but linewise
            for (int v = 0; v < mCurrentFrame->height; v += 2)
            {
                memcpy(decodedFrame->GetData() + offset, mCurrentFrame->data[2] + mCurrentFrame->linesize[2] * v, mCurrentFrame->width / 2);
                offset += mCurrentFrame->width / 2;
            }

            // Copy U plane, but linewise
            for (int u = 0; u < mCurrentFrame->height; u += 2)
            {
                memcpy(decodedFrame->GetData() + offset, mCurrentFrame->data[1] + mCurrentFrame->linesize[1] * u, mCurrentFrame->width / 2);
                offset += mCurrentFrame->width / 2;
            }
        }


    


    mCurrentFrame is the normal AVFrame structure from FFmpeg.

    


    I still think it might be a padding issue, but any help would be much appreciated!