
Media (91)

Other articles (13)

  • Videos

    21 April 2011, by

    Like "audio"-type documents, MediaSPIP displays videos whenever possible using the HTML5 tag.
    One drawback of this tag is that it is not recognized correctly by some browsers (Internet Explorer, to name one), and that each browser natively handles only certain video formats.
    Its main advantage is that video playback is supported natively by the browser, which removes the need for Flash and (...)

  • Helping to translate it

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    At the moment MediaSPIP is only available in French and (...)

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP:

    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

On other sites (2960)

  • Using Unicast RTSP URIs via ffmpeg

    6 August 2019, by Chris Marshall

    I’m fairly new to ffmpeg, so I’d certainly appreciate being given an "M" to "RTFM." The ffmpeg docs are...not-so-easy...to navigate, but I’m trying.

    The goal is to develop a compiled server that incorporates ffmpeg, but first, I need to get it working via CLI.

    I have a standard AXIS Surveillance camera (AXIS M5525-E), set up as an ONVIF device (but that isn’t really relevant to this issue).

    When I query it, I get the following URI as its video streaming URI:

    rtsp://192.168.4.12/onvif-media/media.amp?profile=profile_1_jpeg&streamtype=unicast

    The IP is local to a sandboxed network.

    I add the authentication parameters to it, like so:

    rtsp://<login>:<password>@192.168.4.12/onvif-media/media.amp?profile=profile_1_jpeg&streamtype=unicast

    (Yeah, I know that’s not secure, but this is just for testing and feasibility study. The whole damn sandbox is an insecure mess).

    Now, if I use VLC to open the URI, it works great (of course). Looking at it with a packet analyzer, I see the following negotiation between the device and my computer (at .2; clipped for brevity):

    Id = 11
    Source = 192.168.4.12
    Destination = 192.168.4.2
    Captured Length = 82
    Packet Length = 82
    Protocol = TCP
    Date Received = 2019-08-06 12:18:37 +0000
    Time Delta = 1.342024087905884
    Information = 554 -> 53755 ([ECN, ACK, SYN], Seq=696764098, Ack=3139240483, Win=28960)
                       °
                       °
                       °
    Id = 48
    Source = 192.168.4.12
    Destination = 192.168.4.2
    Captured Length = 366
    Packet Length = 366
    Protocol = TCP
    Date Received = 2019-08-06 12:18:38 +0000
    Time Delta = 2.09382700920105
    Information = 554 -> 53755 ([ACK, PUSH], Seq=696765606, Ack=3139242268, Win=1073)

    Followed immediately by UDP stream packets.

    If, however, I feed the same URI to ffmpeg:

    ffmpeg -i rtsp://<login>:<password>@192.168.4.12/onvif-media/media.amp?profile=profile_1_jpeg&streamtype=unicast -c:v libx264 -crf 21 -preset veryfast -g 30 -sc_threshold 0 -f hls -hls_time 4 /Volumes/Development/webroot/fftest/stream.m3u8
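    One detail worth ruling out first (an assumption on my part, not something the captures confirm): in most shells an unquoted `&` terminates the command and runs it in the background, so everything from `&streamtype=unicast` onward may never reach ffmpeg at all, whereas VLC receives the URI through a dialog and is unaffected. Quoting the whole URI sidesteps this. The sketch below only prints the URI to show it survives intact; `user:pass` is a placeholder:

```shell
# Assumption: an unquoted '&' would end the shell command here, so ffmpeg
# would never see the streamtype parameter. Single quotes keep the URI whole.
uri='rtsp://user:pass@192.168.4.12/onvif-media/media.amp?profile=profile_1_jpeg&streamtype=unicast'
# The real invocation would then be (commented out so the sketch needs no camera):
# ffmpeg -i "$uri" -c:v libx264 -crf 21 -preset veryfast -g 30 -sc_threshold 0 \
#   -f hls -hls_time 4 /Volumes/Development/webroot/fftest/stream.m3u8
echo "$uri"
```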

    I get nothing. No negotiation at all between the device and my computer.

    After that, if I then remove the &streamtype=unicast argument, I get a negotiation, and a stream:

    Id = 10
    Source = 192.168.4.12
    Destination = 192.168.4.2
    Captured Length = 82
    Packet Length = 82
    Protocol = TCP
    Date Received = 2019-08-06 10:37:48 +0000
    Time Delta = 3.047425985336304
    Information = 554 -> 49606 ([ECN, ACK, SYN], Seq=457514925, Ack=2138974173, Win=28960)
                       °
                       °
                       °
    Id = 31
    Source = 192.168.4.12
    Destination = 192.168.4.2
    Captured Length = 345
    Packet Length = 345
    Protocol = TCP
    Date Received = 2019-08-06 10:37:49 +0000
    Time Delta = 3.840152025222778
    Information = 554 -> 49606 ([ACK, PUSH], Seq=457516393, Ack=2138975704, Win=1039)

    I will, of course, be continuing to work out why this is [not] happening, and will post any solutions that I find, but, like I said, I’m fairly new to this, so it’s entirely possible that I’m missing some basic stuff, and would appreciate any guidance.

    Thanks!

  • Generating Video from Downloaded Images Using Fluent-FFmpeg: Issue with Multiple Image Inputs

    11 August 2023, by Pratham Bhagat

    I am having trouble creating a video from multiple images using fluent-ffmpeg in Node.js.


    Here I get the images from the request body and download them into the temp directory:


    const imageUrls = req.body.imageUrls;
    const timeInBetween = parseFloat(req.query.time_in_between) || 1.0;

    const tempDir = path.join(
      context.executionContext.functionDirectory,
      "temp"
    );

    const downloadedImages = await Promise.all(
      imageUrls.map(async (imageUrl, index) => {
        try {
          const response = await axios.get(imageUrl, {
            responseType: "arraybuffer",
          });
          const imageName = `image_${index + 1}.png`;
          const imagePath = path.join(tempDir, imageName);
          await fs.writeFile(imagePath, response.data);
          return imagePath;
        } catch (error) {
          context.log(`Error downloading ${imageUrl}: ${error.message}`);
          return null;
        }
      })
    );


    I want to combine the images in the downloadedImages array into a video:


    const outputVideoPath = path.join(tempDir, "output.mp4");

    let ffmpegCommand = ffmpeg();

    for (let i = 0; i < downloadedImages.length; i++) {
      context.log(downloadedImages.length);
      ffmpegCommand
        .input(downloadedImages[i])
        .inputOptions(["-framerate", `1/${timeInBetween}`])
        .inputFormat("image2")
        .videoCodec("libx264")
        .outputOptions(["-pix_fmt", "yuv420p"]);
    }

    ffmpegCommand
      .output(outputVideoPath)
      .on("end", () => {
        context.log("Video generation successful.");
        context.res = {
          status: 200,
          body: "Video generation and cleanup successful.",
        };
      })
      .on("error", (err) => {
        context.log.error("Error generating video:", err.message);
        context.res = {
          status: 500,
          body: "Error generating video: " + err.message,
        };
      })
      .run();


    Running it with "time_in_between" set to 2, I get a 2-second video containing a single image.


    • Utilized the Fluent-FFmpeg library to generate a video from a list of downloaded images.
    • Expected the video to include all images, each displayed for a specified duration.
    • Tried mapping through the image paths and chaining an input for each image.
    • Expected the video to display the images in sequence.
    • Observed that the generated video contained only the first image and had a duration of 0 seconds.
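    For contrast, a sketch of the single-input route this loop seems to be aiming at: ffmpeg's image2 demuxer can read an entire numbered sequence through one `-i` with a `%d` pattern, so a separate input per image is unnecessary. The paths and pattern below are assumptions based on the `image_${index + 1}.png` naming above, and the command is only printed, not executed:

```shell
# Hypothetical sketch: one image2 input covering temp/image_1.png, temp/image_2.png, ...
# Each frame is held on screen for TIME seconds via -framerate 1/TIME.
TIME=2
cmd="ffmpeg -framerate 1/$TIME -i temp/image_%d.png -c:v libx264 -pix_fmt yuv420p temp/output.mp4"
echo "$cmd"   # printed so the sketch runs without ffmpeg or the images present
```

    In fluent-ffmpeg terms, as far as I can tell from its API, that would be a single `.input("temp/image_%d.png")` with `.inputOptions(["-framerate", "1/" + timeInBetween])`, rather than one `.input()` call per file.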

  • Converting images to video in Android using FFmpeg

    28 February 2024, by sneha Latha

    val cm = "-f image2 -i \"$imagesFolder/JPEG_08%d_06%d_.jpg\" -vcodec mpeg4 -b 800k output.mp4"


    This command is not able to convert the images into a video. My images are named in the format below:


    JPEG_20240228_115618_
    JPEG_20240228_115622_


    I'm using the code below:


    fun convertImagesToVideo(imageList: List<File>, outputVideoPath: String, frameRate: Int) {
        val inputFiles = imageList.joinToString(" ") { it.absolutePath }
        val imagesFolder = File(getExternalFilesDir(Environment.DIRECTORY_PICTURES), "camerax")

        val cm = "-f image2 -i \"$imagesFolder/JPEG_08%d_06%d_.jpg\" -vcodec mpeg4 -b 800k output.mp4"

        val cmd = arrayOf(
            "-framerate", frameRate.toString(),
            inputFiles,
            "-c:v", "mpeg4",
            "-pix_fmt", "yuv420p",
            outputVideoPath
        ).joinToString(" ")

        executeFfmpegCommand(cm, outputVideoPath)
    }

    fun executeFfmpegCommand(exe: String, filePath: String) {

        // create the progress dialog
        val progressDialog = ProgressDialog(this@FolderListActivity)
        progressDialog.setCancelable(false)
        progressDialog.setCanceledOnTouchOutside(false)
        progressDialog.show()

        /*
            The command is executed asynchronously: running it on the main thread
            would block the UI, so the progress dialog would never be drawn.
         */
        FFmpegKit.executeAsync(exe, { session ->
            val returnCode = session.returnCode
            lifecycleScope.launch(Dispatchers.Main) {
                if (returnCode.isValueSuccess) {
                    binding.videoView.setVideoPath(filePath)
                    // Point input_video_uri at the result so further effects can be
                    // applied to it; several videos accumulate in storage, but the
                    // app behaves as if a single video is being edited.
                    input_video_uri = filePath
                    // play the resulting video in the VideoView
                    binding.videoView.start()
                    progressDialog.dismiss()
                    Toast.makeText(this@FolderListActivity, "Filter Applied", Toast.LENGTH_SHORT).show()
                } else {
                    progressDialog.dismiss()
                    Log.d("TAG", session.allLogsAsString)
                    Toast.makeText(this@FolderListActivity, "Something Went Wrong!", Toast.LENGTH_SHORT)
                        .show()
                }
            }
        }, { log ->
            lifecycleScope.launch(Dispatchers.Main) {
                progressDialog.setMessage("Applying Filter..${log.message}")
            }
        }) { statistics -> Log.d("STATS", statistics.toString()) }
    }

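    A possible direction (my reading of the symptoms, not verified on a device): the file names are timestamps such as `JPEG_20240228_115618_.jpg`, so the printf-style pattern `JPEG_08%d_06%d_.jpg` matches nothing; note also that `cmd` is built but `cm` is the string actually executed, and `-b` should be `-b:v` for a video bitrate. ffmpeg's concat demuxer takes an explicit file list instead of a pattern, which fits non-sequential names. A sketch using the two file names quoted above, with a 1-second duration per image as an assumption; the ffmpeg call itself is only printed:

```shell
# Hypothetical sketch: write a concat-demuxer list naming each image explicitly,
# since timestamped names cannot be matched by an image2 %d pattern.
printf "file '%s'\nduration 1\n" \
  JPEG_20240228_115618_.jpg JPEG_20240228_115622_.jpg > list.txt
# Corresponding ffmpeg call (printed, not run, so no ffmpeg install is required):
echo "ffmpeg -f concat -safe 0 -i list.txt -c:v mpeg4 -b:v 800k -pix_fmt yuv420p output.mp4"
```

    In the Kotlin code, the list file could be generated from `imageList` and the concat arguments passed to `FFmpegKit.executeAsync` in place of `cm`.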