
Other articles (1)

  • Possible deployments

    31 January 2010, by

    Two types of deployment are possible, depending on two aspects: the intended installation method (standalone or as a farm), and the expected number of daily encodings and the expected traffic.
    Encoding video is a heavy process that consumes a great deal of system resources (CPU and RAM), and all of this has to be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

On other sites (1483)

  • How to pipe rgba data to ffmpeg using Node.js?

    7 July 2015, by Swoth

    Context
    I am developing a small program that is supposed to render a video, as fast as possible, based on frames captured from a canvas. The animation is rendered to a headless canvas implementation using Rekapi (a JavaScript animation framework). The headless canvas is a Node.js module called node-canvas, by Automattic. The animation frames are rendered one after another; after each render, the frame is retrieved using canvas.getImageData().data (a Uint8ClampedArray of RGBA values, faster than canvas.toDataURL) and pushed into an array. Every frame is then supposed to be sent to ffmpeg to create a video.

    Rekapi -> canvas -> getImageData -> array -> ffmpeg

    Problem
    Everything works except the transfer of the stored RGBA array data to ffmpeg; I can't seem to get the transport right in Node.js. How do I pass my frames to ffmpeg using Node.js, and what did I do wrong?

    What I do
    The code below renders each animation frame and saves it as RGBA data into an array; the renderScene function drives the rendering.

    console.log("Rendering getImageData, length:"+ videoLength);
    var dataArray = [];
    function imageDataCallback() {
       dataArray.push(context.getImageData(0, 0, 1280, 720).data);
    }
    rekapi.on('afterUpdate', imageDataCallback);

    var time = Date.now();
    renderScene(rekapi);
    console.log('Time used: ' + (Date.now() - time) + 'ms;');
    rekapi.off('afterUpdate', imageDataCallback);

    I have already tried various ways to pipe that data to ffmpeg. In general, I created a child process in Node.js that executes ffmpeg:

    var spawn = require('child_process').spawn;
    var child = spawn('ffmpeg', [
           '-pix_fmt', 'rgba',
           '-s', '1280x720',
           '-r', '25',            // arguments must be strings, not numbers
           '-f', 'rawvideo',
           '-vcodec', 'rawvideo',
           '-i', '-',             // read frames from stdin
           '-threads', '0',       // use all cores
           'test.mpg']);

    I also tried using -i pipe:0, which is the same as -i -, just to be sure.
    After creating the process, I registered for various events to see what happens:

    child.on('error', function(error){
       console.log(error);
    });
    child.stdin.on('data', function (data) {
       console.log('data retrieved')
    });
    child.on('exit', function (code) {
       console.log('child exit with code:' + code)
    });
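
    One caveat, sketched here rather than asserted: from the parent's side, child.stdin is a writable stream, so a 'data' listener on it never fires; 'data' events come from readable streams like child.stderr, which is also where ffmpeg prints all of its diagnostics. A minimal set of handlers that would actually surface what ffmpeg is doing might look like this:

    child.stderr.on('data', function (data) {
       // ffmpeg logs its banner, progress and errors here
       console.log('ffmpeg: ' + data);
    });
    child.stdin.on('error', function (err) {
       // raised when writing after ffmpeg has already closed its stdin
       console.log('stdin error: ' + err.message);
    });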

    Now I write my array data to the child's stdin:

    for(var i = 0; i < dataArray.length; ++i){
       var buffer = new Buffer(dataArray[i]);
       child.stdin.write(buffer);
       console.log('wrote: ' + i);
    }

    I wrote 25 frames this way. The console displays the following:

    wrote: 24
    wrote: 25
    events.js:85
         throw er; // Unhandled 'error' event
               ^
    Error: read ECONNRESET
       at exports._errnoException (util.js:746:11)
       at Pipe.onread (net.js:559:26)

    ffmpeg generated a 0-byte test.mpg, and it seems that none of the child callbacks I defined (like data), except error, was ever called. I am not 100% sure about the lifecycle, but as I understood it, data should fire each time I write something.

    I am very new to Node.js, so I might not understand the big picture of its child processes.

    Since my reputation is too low, I am not allowed to post more than 2 links (this is my first question), and I don't feel comfortable in untyped languages like JavaScript.
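
    A plausible reading of the failure, offered as a sketch rather than a definitive answer: ECONNRESET means ffmpeg's end of the pipe went away, typically because ffmpeg exited early (its stderr was never read, so the reason stays invisible) or because stdin was never closed with child.stdin.end(), so ffmpeg never saw end-of-input. A minimal end-to-end sketch along those lines, assuming the same 1280x720 RGBA frames in dataArray as above (new Buffer was the API of the era; modern Node would use Buffer.from):

    var spawn = require('child_process').spawn;

    var child = spawn('ffmpeg', [
           '-f', 'rawvideo',       // raw frames, no container
           '-pix_fmt', 'rgba',
           '-s', '1280x720',
           '-r', '25',
           '-i', '-',              // read the frames from stdin
           '-threads', '0',
           '-y',                   // overwrite test.mpg if present
           'test.mpg']);

    child.stderr.on('data', function (data) {
       console.log('ffmpeg: ' + data); // progress and error messages
    });
    child.on('close', function (code) {
       console.log('ffmpeg finished with code ' + code);
    });

    // write one frame at a time, honouring backpressure: when write()
    // returns false, wait for 'drain' before writing the next frame
    var i = 0;
    (function writeNext() {
       while (i < dataArray.length) {
          var ok = child.stdin.write(new Buffer(dataArray[i++]));
          if (!ok) {
             child.stdin.once('drain', writeNext);
             return;
          }
       }
       child.stdin.end(); // signal EOF so ffmpeg can finalize the file
    })();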

  • FFmpegMediaPlayer: findLibrary returned null

    8 August 2015, by IceJOKER

    I use https://github.com/wseemann/FFmpegMediaPlayer in my application, but some Android devices throw an exception:

    java.lang.ExceptionInInitializerError
    at ru.mypackage.PlayService.initPlayer(PlayService.java:74)
    at ru.mypackage.PlayService.onCreate(PlayService.java:68)
    at android.app.ActivityThread.handleCreateService(ActivityThread.java:1949)
    at android.app.ActivityThread.access$2500(ActivityThread.java:117)
    at android.app.ActivityThread$H.handleMessage(ActivityThread.java:989)
    at android.os.Handler.dispatchMessage(Handler.java:99)
    at android.os.Looper.loop(Looper.java:130)
    at android.app.ActivityThread.main(ActivityThread.java:3687)
    at java.lang.reflect.Method.invokeNative(Native Method)
    at java.lang.reflect.Method.invoke(Method.java:507)
    at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:867)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:625)
    at dalvik.system.NativeStart.main(Native Method)
    Caused by: java.lang.UnsatisfiedLinkError: Couldn't load avutil: findLibrary returned null
    at java.lang.Runtime.loadLibrary(Runtime.java:429)
    at java.lang.System.loadLibrary(System.java:554)
    at wseemann.media.FFmpegMediaPlayer.<clinit>(FFmpegMediaPlayer.java:620)
    ... 13 more

    My project:
    (screenshots of the project attached to the original question are not preserved)

    Can somebody explain to me what's wrong here?
    On my device and some others the app works fine, but on certain devices (for example, the Galaxy Ace (GT-S5830i), Android 2.3.3 - 2.3.7) it throws the exception above.

    P.S. About the "lib" prefix, I already understand that ( http://developer.android.com/intl/ru/reference/java/lang/System.html#mapLibraryName(java.lang.String) ).

  • Is there a way to use ffmpeg audio filters to automatically synchronize 2 streams with similar content

    29 May 2015, by user3741412

    I have a situation where I have a video capture of HD content via HDMI, with audio from a sound board that goes through an impedance drop into the microphone input of a camcorder. That same signal is split at line level to a 'line in' jack on the same computer that is capturing the HDMI. Alternatively, I can capture the audio via USB from the soundboard, which is probably the best plan, but it carries the same issue.

    The point is that the line-in or USB capture will be much higher quality than the one over HDMI, because the line out -> impedance change -> mic in path produces inferior quality: simply brushing the mic jack on the camera while trying to change the zoom (close proximity) can cause noise on the recording.

    So I can do this today:

    • Take the good sound and the camera-captured sound, load each into
      Audacity, and fairly quickly use the Time Shift tool to fit the
      good audio exactly to the questionable audio from the HDMI capture,
      then cut the good audio to the exact length of the video. After
      that I can use ffmpeg or other video-editing software to replace
      the questionable audio with the better audio.

    But while somewhat quick and easy, this always carries with it a bit of human error and time. I'd like to automate it if possible, as the process is repeated at least weekly throughout the year.

    Does anyone have a view on whether any of these ideas have merit, or can anyone suggest another approach?

    1. I suspect, but have yet to confirm, that the system timestamp of the start time may be recorded both in audio captured with something like Audacity (or with the USB capture tool from the sound board) and in the HDMI MPEG-2 video. I tried ffprobe on a couple of Audacity-captured .wav files but didn't see anything in the results about such a time code; perhaps other audio formats or other probing tools include this info. Can anyone advise whether this is common with any particular capture tools or file formats?

      • If so, I think I could get the best results by extracting this information and then using simple adelay and atrim filters in ffmpeg to sync the two sources reliably in a single ffmpeg call (a sketch of this idea appears just after this list). This is all theoretical for me right now; I've never tried either of these filters, and I'm just trying to avoid blind alleys by asking for advice up front.
    2. If such timestamps are not embedded, I could possibly use the file-system timestamp for the same idea expressed in 1a, but I suspect the file open of the two capture tools may have different inherent delays. Possibly those delays will turn out to be nearly constant, so the approach could work with a built-in constant anticipation delay, but that sounds messy and less reliable than idea 1. Still, I'd take it if it proves reasonably reliable.

    3. Are there any ffmpeg or general digital-audio experts out there who know of filters that could be applied to the actual data: normalizing the peak amplitudes, or normalizing both streams to some RMS value, then stepping through a short 10-second snippet of audio, sliding one stream 0.01 s against the other repeatedly, subtracting the two, and looking for a minimum (a brute-force version of this search is sketched at the end of this question)? It sounds like it could take a while, but if it could finish in under a minute and be reliable, I suspect it could work. I have only rudimentary knowledge of audio streams, and perhaps what I suggest is simply not plausible, but since each stream starts from the same source I think there should be a chance. I am just way out of my depth as to how to go down this road, so if someone knows such magic, or can throw me some names of filters and example calls, I can explore whether I can make it work.

    4. Any hardware-level suggestions for taking a line-level output down to a mic-level input without the problems I am seeing with a simple in-line impedance-drop module, so that I can simply rely on the audio from the HDMI?
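
    For idea 1, assuming the offset between the two captures is known (extracted from timestamps or measured), here is one hedged sketch of the single ffmpeg call, wrapped in Node.js like the other examples on this page; the file names and the 1420 ms offset are invented purely for illustration:

    var spawn = require('child_process').spawn;

    var offsetMs = 1420; // measured lead of the good audio, illustrative only
    var ff = spawn('ffmpeg', [
           '-i', 'capture.mpg',    // HDMI video with the questionable audio
           '-i', 'good_audio.wav', // clean line-in/USB capture
           // delay both channels of the good audio by offsetMs milliseconds
           '-filter_complex', '[1:a]adelay=' + offsetMs + '|' + offsetMs + '[a]',
           '-map', '0:v',          // keep the video stream as-is
           '-map', '[a]',          // replace the audio with the shifted capture
           '-c:v', 'copy',         // no video re-encode
           '-shortest',            // cut the audio at the video's end
           'synced.mpg']);

    ff.stderr.on('data', function (d) { process.stdout.write(d); });
    ff.on('close', function (code) { console.log('ffmpeg exited with ' + code); });

    If the good audio instead starts late, trimming that input with atrim (or -ss on it) would be the counterpart of the delay.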

    Thanks in advance for any pointers or suggestions!
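
    Following up on idea 3 above: a deliberately naive version of the search it describes, again only a sketch. It assumes both captures have already been decoded to mono 16-bit little-endian PCM at the same sample rate (ffmpeg can do that with -ac 1 -f s16le), and it slides one stream against the other in 10 ms steps, after peak normalization, looking for the minimum mean absolute difference over a 10-second window. Real tools would use FFT-based cross-correlation, which is far faster, but the brute force shows the idea:

    // a, b: Node Buffers of mono s16le PCM at the same sample rate.
    // Returns the shift of b relative to a (in samples) that minimizes
    // the mean absolute difference over a 10 s comparison window.
    function findOffset(a, b, sampleRate) {
       var window = 10 * sampleRate;             // 10 s comparison window
       var maxShift = 5 * sampleRate;            // search +/- 5 s
       var step = Math.round(0.01 * sampleRate); // 10 ms steps, as in idea 3

       // peak-normalize so the two captures' levels are comparable
       function norm(buf, n) {
          var peak = 1, s = new Float32Array(n);
          for (var i = 0; i < n; i++) {
             var v = buf.readInt16LE(2 * i);
             if (Math.abs(v) > peak) peak = Math.abs(v);
             s[i] = v;
          }
          for (var j = 0; j < n; j++) s[j] /= peak;
          return s;
       }
       var n = Math.min(a.length, b.length) >> 1; // bytes -> samples
       var sa = norm(a, n), sb = norm(b, n);

       var best = Infinity, bestShift = 0;
       for (var shift = -maxShift; shift <= maxShift; shift += step) {
          var sum = 0, count = 0;
          for (var i = 0; i < window; i += 4) {   // sparse sampling for speed
             var k = i + shift;
             if (i >= n || k < 0 || k >= n) continue;
             sum += Math.abs(sa[i] - sb[k]);
             count++;
          }
          if (count > 0 && sum / count < best) {
             best = sum / count;
             bestShift = shift;
          }
       }
       return bestShift; // divide by sampleRate to get seconds
    }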