
Media (91)

Other articles (112)

  • Submit improvements and additional plugins

    10 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the official distribution will be considered.
    You can use the development mailing list to let us know about it, or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...)

  • Publish on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Add notes and captions to images

    7 February 2011

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only the site's administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (3104)

  • Rails 5 - Video streaming using Carrierwave: uploaded video size constraint on the server

    21 March 2020, by Milind

    I have a working Rails 5 app using React for the frontend and React Dropzone Uploader to upload video files using carrierwave.

    So far, what is working well is listed below:

    1. Users can upload videos, and the videos are encoded based on the selection made by the user: HLS or MPEG-DASH for online streaming.
    2. Once a video is uploaded on the server, it starts streaming it by:
      • First, copying the video to the /tmp folder.
      • Running a bash script that uses ffmpeg to transcode the uploaded video with predefined commands, producing new video fragments inside the /tmp folder (a sketch of this step follows the list).
      • Once the background job is done, all the videos are uploaded to AWS S3, which is how carrierwave works by default.
    3. So, when multiple videos are uploaded, they are all copied to the /tmp folder, then transcoded and eventually uploaded to S3.
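
    A minimal sketch of the transcoding step above, assuming the bash script wraps an ffmpeg HLS command along these lines; the paths and flags are illustrative assumptions, not the author's actual script, and it is spawned from Node.js here to match the back-end style used elsewhere on this page:

    const { spawn } = require("child_process");

    const input = "/tmp/upload-1234.mp4";           // hypothetical uploaded file
    const playlist = "/tmp/upload-1234/index.m3u8"; // fragments land beside this

    // Re-encode to H.264/AAC and cut the output into ~6-second HLS fragments.
    const job = spawn("ffmpeg", [
      "-i", input,
      "-c:v", "libx264",
      "-c:a", "aac",
      "-hls_time", "6",
      "-hls_playlist_type", "vod",
      playlist, // assumes the output directory already exists
    ]);

    job.on("close", (code) => {
      // On success (code 0), the playlist and .ts fragments can be uploaded
      // to S3 by the background job described above.
      console.log("ffmpeg exited with code", code);
    });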

    My questions, where I am looking for some help, are listed below:

    1- The above process is fine for small videos, but what if many concurrent users upload 2 GB videos? I know this will kill my server, as the /tmp folder will keep growing and consume all the memory, eventually killing it. How can I allow concurrent video uploads without affecting my server's memory consumption?

    2- Is there a way to upload the videos directly to AWS S3 first, and then use a separate proxy server/child application to encode them: download each video to the child server, convert it, and upload it again to the destination (see the sketch after this list)? But this is almost the same thing done in the cloud, where memory consumption can be on-demand but will not be cost-effective.

    3- Is there an easy and cost-effective way to upload large videos, transcode them, and upload them to AWS S3 without affecting my server's memory? Am I missing some piece of technical architecture here?

    4- How do YouTube/Netflix work? I know they do the same thing in a smarter way, but can someone help me improve this?
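
    For question 2, a minimal sketch of the direct-to-S3 route, assuming AWS SDK v3 for JavaScript; the region, bucket name, key, and content type are hypothetical. The server only signs the request, and the browser PUTs the file straight to S3, so large uploads never touch the app server's /tmp:

    const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
    const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

    const s3 = new S3Client({ region: "us-east-1" }); // hypothetical region

    // Returns a URL the browser can PUT the raw file to, valid for 15 minutes.
    async function presignUpload(key) {
      const command = new PutObjectCommand({
        Bucket: "my-video-uploads", // hypothetical bucket
        Key: key,
        ContentType: "video/mp4",
      });
      return getSignedUrl(s3, command, { expiresIn: 900 });
    }

    A background worker (or a managed transcoding service) can then pull each object from S3 and transcode it, so neither uploading nor transcoding consumes the web server's memory or /tmp space.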

    Thanks in advance.

  • Connect a remote IP camera as a WebRTC client

    5 April 2017, by idosh

    I have two cameras:

    • An internal webcam embedded in my laptop.
    • A remote IP camera connected to my laptop over Wi-Fi (it transmits raw H.264 data over TCP, with no container). I'm getting the stream using Node.js.

    My goal is to create a WebRTC network and connect the remote camera as another client.

    I'm trying to figure out possible solutions:

    • My naive thinking was that I would stream the remote camera's payload to the browser. But as I've come to understand, the browser can't handle the stream without a container. Fair enough. But I don't understand why it does handle the video stream that arrives from my internal camera (from the navigator.getUserMedia() function). What's the difference between the two streams? Why can't I mimic the stream from the remote camera as the input?
    • To bypass this problem I thought about creating a virtual camera using ManyCam (or a ManyCam-like app). To accomplish that I need to convert my TCP stream into an RTP stream (in order to feed ManyCam). Though I did see some info on the ffmpeg command line, I couldn't find anything in the Node.js wrapper package "fluent-ffmpeg". Is it possible to do this using fluent-ffmpeg (a sketch follows this list), or only with the command-line tool? Would it require another RTP server in the middle, such as this one?
    • The third option I read about is using Node.js as a WebRTC client. I saw it was implemented in "simple-peer". I tried it out together with socket.io (socket.io-p2p). Unfortunately I couldn't get it to work: when I try to create a socket/peer on the server it throws errors, as it expects options that are only available on the client side (like window, location, etc.). Am I doing something wrong? Maybe there is a more suitable framework for this?
    • The fourth option is to use a streaming server in the middle, such as Kurento. From my understanding it receives RTP as input and transmits it as a WebRTC client. It feels like the most heavyweight option, but maybe it's not so bad (I have to admit I haven't investigated it yet).
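
    For the ManyCam option above, a minimal sketch of the fluent-ffmpeg route, assuming the camera pushes its raw H.264 feed to a local TCP port; the port and the RTP target are hypothetical:

    const ffmpeg = require("fluent-ffmpeg");
    const net = require("net");

    // Each camera connection is a readable stream of raw Annex-B H.264,
    // which fluent-ffmpeg accepts directly as an input.
    net.createServer((cameraSocket) => {
      ffmpeg(cameraSocket)
        .inputFormat("h264")            // no container on the wire
        .videoCodec("copy")             // re-mux only, no transcoding
        .format("rtp")
        .output("rtp://127.0.0.1:5004") // feed ManyCam or a media server here
        .on("error", (err) => console.error("ffmpeg error:", err.message))
        .run();
    }).listen(8000);                    // hypothetical camera port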

    Any thoughts?

    Thanks!

  • FFMPEG - How to wait until all blobs are written before finishing the ffmpeg process when getting them from the MediaRecorder API

    7 November 2020, by Caio Nakai

    I'm using the MediaRecorder API to record a video of the user's screen and sending the blobs through a web socket to a Node.js server. The server uses the blobs to create a webm video file. The video is created fine, but with a delay: after the user clicks the stop-recording button the MediaRecorder stops, but the server doesn't seem to finish processing all the blobs (at least that's what I think is happening), and when I check the generated video file the last few seconds of the recording are missing. I wonder if there's a way to solve this. Any help is appreciated :)

    This is the front-end code that sends the blobs to the Node.js server:

const startScreenCapture = async () => {
  try {
    let screenStream;
    videoElem = document.getElementById("myscreen");
    screenStream = await navigator.mediaDevices.getDisplayMedia(
      displayMediaOptions
    );

    const recorderOptions = {
      mimeType: "video/webm;codecs=vp9",
      videoBitsPerSecond: 3 * 1024 * 1024,
    };

    screenMediaRecorder = new MediaRecorder(screenStream, recorderOptions);
    screenMediaRecorder.start(1); // timeslice in ms: deliver a blob for roughly every millisecond of recording
    screenMediaRecorder.ondataavailable = (event) => {
      console.debug("Got blob data:", event.data);
      console.log("Camera stream: ", event.data);
      if (event.data && event.data.size > 0) {
        socket.emit("screen_stream", event.data);
      }
    };

    videoElem.srcObject = screenStream;
    // console.log("Screen stream", screenStream);
    // socket.emit("screen_stream", screenStream);
  } catch (err) {
    console.error("Error: " + err);
  }
};

const stopCapture = (evt) => {
  let tracks = videoElem.srcObject.getTracks();

  tracks.forEach((track) => track.stop());
  videoElem.srcObject = null;
  screenMediaRecorder.stop();
  socket.emit("stop_screen");
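  // Note: MediaRecorder.stop() delivers one final ondataavailable event
  // asynchronously, but socket.close() below runs immediately, so the last
  // blob may never be sent. Waiting for the recorder's onstop event before
  // closing the socket would likely avoid losing the final seconds.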
  socket.close();
};

    This is the Node.js back-end that handles the blobs and generates the video file:

  const child_process = require("child_process");

  // ffmpeg reads the incoming webm stream from stdin ("-i -") and writes it
  // to screen.webm without re-encoding ("-c:v copy -c:a copy").
  const ffmpeg2 = child_process.spawn("ffmpeg", [
    "-i",
    "-",
    "-c:v",
    "copy",
    "-c:a",
    "copy",
    "screen.webm",
  ]);


  socket.on("screen_stream", (msg) => {
    console.log("Writing screen blob! ");
    ffmpeg2.stdin.write(msg);
  });

  socket.on("stop_screen", () => {
    console.log("Stop recording..");
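    // Likely fix (assumption): close ffmpeg's stdin once the client signals
    // the end, so ffmpeg sees EOF, flushes its buffers and finalizes
    // screen.webm instead of being cut off mid-stream.
    ffmpeg2.stdin.end();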
  });