Advanced search

Media (91)

Other articles (47)

  • Media quality after processing

    21 June 2013, by

    Getting the settings of the media-processing software right matters for striking a balance between the parties involved (the host's bandwidth, media quality for the editor and the visitor, accessibility for the visitor). How should you set the quality of your media?
    The higher the media quality, the more bandwidth is used, and visitors on a low-bandwidth internet connection will have to wait longer. Conversely, the lower the media quality, the more degraded the media becomes, even (...)
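    As a rough illustration of the trade-off (a hedged example, not MediaSPIP's actual settings), with ffmpeg the CRF value is one way to trade quality against file size, and therefore against the bandwidth each visitor consumes:

    # Higher CRF = smaller file, lower quality; lower CRF = better quality, more bandwidth.
    # 23 is the libx264 default; 28 noticeably shrinks files for low-bandwidth visitors.
    ffmpeg -i source.webm -c:v libx264 -crf 28 -preset medium -c:a aac output.mp4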

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To use it, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)
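    As a rough sketch (assuming Chosen's standard jQuery API; the selector mirrors the select[multiple] example above), the same enhancement applied by hand would look like:

    // Enhance every multiple-selection list with Chosen (jQuery plugin API).
    jQuery(function ($) {
      $('select[multiple]').chosen({
        width: '100%' // assumption: stretch each widget to its container
      });
    });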

  • Custom menus

    14 November 2010, by

    MediaSPIP uses the Menus plugin to manage several configurable menus for navigation.
    This lets channel administrators configure those menus in detail.
    Menus created when the site is initialized
    By default, three menus are created automatically when the site is initialized: the main menu; identifier: barrenav; this menu is usually inserted at the top of the page, after the header block, and its identifier makes it compatible with skeletons based on Zpip; (...)

On other sites (6341)

  • ffmpeg wasm - how to take client-side created mp4 and upload it to the same server hosting the index/js files being used

    29 July 2022, by John Farrell

    OK, so I'm an IT guy and kind of a noob on the dev side of the fence. I've been able to create this ffmpeg wasm page that takes a canvas and converts it to webm and .mp4. What I WANT to do is take the resulting .mp4 file and upload it to the server where the page/JS are being served from. Is this possible? I will include my source code, which is fairly simple and straightforward; I just don't know how to manipulate the resulting mp4 file that ffmpeg spits out (I realize it is happening client side) to be able to push it up to the server (maybe with an upload.php type situation?). The solution can be HTML/JavaScript/PHP, whatever, so long as it takes the mp4 output and gets it onto the server. I'd VERY MUCH appreciate a hand here.

    I'm going to try my best to properly insert the HTML and JS; please bear with me if I've done something wrong, as I've never had to ask a question on here, I usually just look up existing answers.

const { createFFmpeg } = FFmpeg;
const ffmpeg = createFFmpeg({
  log: true
});

// Transcode the recorded webm data to mp4 with ffmpeg.wasm (0.8.x API).
const transcode = async (webcamData) => {
  const message = document.getElementById('message');
  const name = 'record.webm';
  await ffmpeg.load();
  message.innerHTML = 'Start transcoding';
  await ffmpeg.write(name, webcamData);
  await ffmpeg.transcode(name, 'output.mp4');
  message.innerHTML = 'Complete transcoding';
  const data = ffmpeg.read('output.mp4');

  // Show the mp4 in the page and expose it via the download link (id="dl").
  const video = document.getElementById('output-video');
  video.src = URL.createObjectURL(new Blob([data.buffer], { type: 'video/mp4' }));
  dl.href = video.src;
  dl.innerHTML = "download mp4";
}

fn().then(async ({ url, blob }) => {
  transcode(new Uint8Array(await blob.arrayBuffer()));
});

function fn() {
  var recordedChunks = [];
  var time = 0;
  var canvas = document.getElementById("canvas");
  var mediaRecorder;

  return new Promise(function (res, rej) {
    var stream = canvas.captureStream(60);

    mediaRecorder = new MediaRecorder(stream, {
      mimeType: "video/webm; codecs=vp9"
    });

    mediaRecorder.start(time);

    mediaRecorder.ondataavailable = function (e) {
      recordedChunks.push(e.data); // was event.data: use the handler argument
      // for demo, removed stop() call to capture more than one frame
    };

    mediaRecorder.onstop = function (event) {
      var blob = new Blob(recordedChunks, {
        type: "video/webm"
      });
      var url = URL.createObjectURL(blob);
      res({ url, blob }); // resolve both blob and url in an object

      myVideo.src = url; // <video id="myVideo"> shows the raw webm recording
      // removed data url conversion for brevity
    };

    // for demo, draw random lines and then stop recording
    var i = 0,
      tid = setInterval(() => {
        if (i++ > 20) { // draw 20 lines
          clearInterval(tid);
          mediaRecorder.stop();
        }
        let cx = canvas.getContext("2d");
        cx.beginPath();
        cx.strokeStyle = 'green';
        cx.moveTo(Math.random() * 100, Math.random() * 100);
        cx.lineTo(Math.random() * 100, Math.random() * 100);
        cx.stroke();
      }, 200);
  });
}

    


    <script src="https://unpkg.com/@ffmpeg/ffmpeg@0.8.1/dist/ffmpeg.min.js" defer></script>
    <script src='http://stackoverflow.com/feeds/tag/canvas2mp4.js' defer></script>

    here is a canvas

    here is a recorded video of the canvas in webM format

    here is a transcoded mp4 from the webm above CLIENT SIDE using ffmpeg
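    A minimal sketch of the upload step the question asks about, assuming a hypothetical upload.php endpoint on the same origin that accepts a multipart POST (the endpoint and field names are illustrative, not from the original post):

// POST the transcoded mp4 back to the server that served the page.
// 'upload.php' and the 'video' field name are assumptions for illustration.
async function uploadMp4(data) { // data: the Uint8Array from ffmpeg.read()
  const blob = new Blob([data.buffer], { type: 'video/mp4' });
  const form = new FormData();
  form.append('video', blob, 'output.mp4'); // arrives server-side like a file input

  const resp = await fetch('upload.php', { // relative URL: same origin as the page
    method: 'POST',
    body: form
  });
  if (!resp.ok) throw new Error('Upload failed: ' + resp.status);
  return resp.text();
}

    Calling uploadMp4(data) at the end of transcode() would push each finished mp4 up; on the PHP side, something like move_uploaded_file($_FILES['video']['tmp_name'], ...) would store it.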

  • Merging input Streams with nodejs/ffmpeg

    14 September 2020, by jAndy

    I'm creating a very basic and rudimentary video web chat. On the client side, I'm going to use a simple getUserMedia API call to capture the webcam data and send the video data as a data blob to my server.

    From there, I'm planning to either use the fluent-ffmpeg library or just spawn ffmpeg myself and pipe that raw data to ffmpeg, which in turn does some magic and pushes that out as an HLS stream to an Amazon AWS service (for instance), which then gets displayed in a web browser for everyone participating in the video chat.

    So far, I think all of this should be fairly easy to implement, but I keep puzzling over how I can create a "combined" or "merged" frame and stream, so that the HLS output from my server to the distributing cloud service is only one combined data stream.

    If there are 3 people in the video chat, my server receives 3 data streams from those clients and combines them (from the individual webcam data sources) into one output stream.

    How could that be accomplished? Can I "create" a new frame with ffmpeg, so to speak? I would be very thankful if anybody could give me a heads up here; maybe I'm thinking in a completely wrong direction.

    Another question that arises is whether I can really just "dump" any data I receive from a binary blob created from getUserMedia or MultiStreamRecorder to ffmpeg, or whether I have to specify somewhere, and somehow, the exact codecs being used, etc.
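    A minimal Node.js sketch of the merging idea, assuming the three client streams have been written to webm files and ffmpeg is on the PATH (the file names, the hstack layout and the HLS output path are illustrative):

    // Combine three webm inputs into one frame and push it out as HLS,
    // by spawning ffmpeg from Node. Names and layout are assumptions.
    const { spawn } = require('child_process');

    const args = [
      '-i', 'cam1.webm', '-i', 'cam2.webm', '-i', 'cam3.webm',
      // build one "combined" frame: stack the three videos horizontally
      // (hstack assumes equal heights; scale or xstack handles mixed sizes)
      '-filter_complex', '[0:v][1:v][2:v]hstack=inputs=3[v]',
      '-map', '[v]',
      '-c:v', 'libx264', '-preset', 'veryfast',
      '-f', 'hls', '-hls_time', '4', 'stream.m3u8'
    ];

    const ff = spawn('ffmpeg', args);
    ff.stderr.on('data', (d) => process.stderr.write(d)); // ffmpeg logs to stderr
    ff.on('close', (code) => console.log('ffmpeg exited with code', code));

    As for the codec question: a webm blob from getUserMedia/MediaRecorder is a real container carrying its own codec metadata, so ffmpeg can usually probe piped input without explicit format flags; only raw, headerless data needs -f and codec options spelled out.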

  • Starting multiple upstart services after a parent service

    16 March 2013, by CoryG

    I'm trying to configure Upstart to start an ffserver process and many (21) ffmpeg processes; the ffmpeg processes must be started after the ffserver process, and all of them should be respawned if they stop.

    So far, for the ffserver process I have:

    # ffserver

    description     "ffserver"

    start on (filesystem and net-device-up IFACE=eth0) and runlevel [2345]
    stop on runlevel [!2345]

    respawn
    respawn limit 10 5

    pre-start script
       test -x /usr/local/bin/ffmpeg || { stop; exit 0; }
       test -x /usr/local/bin/ffserver || { stop; exit 0; }
    end script

    script
       /usr/local/bin/ffserver -f /etc/ffserver.conf
    end script

    post-start script
       PID=`status ffserver | egrep -oi '([0-9]+)$' | head -n1`
       echo $PID > /var/run/ffserver.pid
    end script

    post-stop script
       rm -f /var/run/ffserver.pid
    end script

    This works fine for ffserver. However, I would like to know how to get the ffmpeg services into a similar startup configuration managed by Upstart (ideally within a single Upstart config file, but I can create 21 different config files if required).

    (It might be worth noting that I'm using the NoDaemon option in /etc/ffserver.conf to ensure it doesn't daemonize itself, and the ffmpeg instances will likewise not self-daemonize. I would, however, like pid files for them in /var/run/ffmpegx.pid, where x is an identifier [[0-15],0_1_2_3,4_5_6_7,8_9_10_11,12_13_14_15,all], for some other reasons.)
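    One hedged way to express the 21 ffmpeg processes as a single Upstart instance job (the ID variable, the per-instance args files and the start loop are assumptions, not tested config):

    # /etc/init/ffmpeg.conf -- one job definition, many named instances

    description     "ffmpeg worker"

    instance $ID

    stop on stopping ffserver

    respawn
    respawn limit 10 5

    script
       # per-instance arguments kept in a file named after the identifier
       exec /usr/local/bin/ffmpeg $(cat /etc/ffmpeg/args.$ID)
    end script

    post-start script
       PID=`status ffmpeg ID=$ID | egrep -oi '([0-9]+)$' | head -n1`
       echo $PID > /var/run/ffmpeg$ID.pid
    end script

    post-stop script
       rm -f /var/run/ffmpeg$ID.pid
    end script

    Because instance jobs are not started by events alone without their instance variable, each worker would be launched from ffserver's post-start script, e.g. a loop over the identifiers calling start ffmpeg ID=$id; the stop on stopping ffserver stanza then tears them down along with the parent.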