Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (20)

  • Qualité du média après traitement

    21 June 2013

    Getting the settings right in the software that processes media matters for striking a balance between the parties involved (the host's bandwidth, media quality for the editor and the visitor, accessibility for the visitor). How should you set the quality of your media?
    The higher the media quality, the more bandwidth is used, and visitors on a low-speed internet connection will have to wait longer. Conversely, the lower the media quality, the more degraded the media becomes, even (...)
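
    The trade-off the teaser describes can be illustrated with ffmpeg's constant-rate-factor setting; here is a minimal sketch (my own illustration, not the article's truncated text; the file names and CRF values are placeholders), where a lower CRF means higher quality and more bandwidth:

    import subprocess

    def encode(src, dst, crf):
        # re-encode with libx264; lower CRF = higher quality, bigger file
        subprocess.run([
            "ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-crf", str(crf),
            "-c:a", "aac",
            dst,
        ], check=True)

    # near-transparent quality for fast connections
    encode("source.mp4", "high_quality.mp4", 18)
    # lighter file for low-bandwidth visitors
    encode("source.mp4", "low_bandwidth.mp4", 28)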

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things):
    - implementation costs to be shared between several different projects / individuals
    - rapid deployment of multiple unique sites
    - creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (6542)

  • ffmpeg h.264 invalid cutting

    1 May 2012, by E.Ar

    I have an S3 bucket with several hundred video files.
    Those files were created by cutting parts out of larger video files using ffmpeg.
    I wrote a script for this, which downloads the original video file from another bucket, runs ffmpeg to cut the file, and uploads the new file to its bucket.
    For downloading and uploading from/to S3 I used this PHP library.
    The ffmpeg syntax I used:

    ffmpeg -y -vsync 2 -async 1 -ss [time-in] -t [duration] -i [large-input-video.mp4] -vcodec copy -acodec copy [short-output-video.mp4]

    Which should just cut the original file between the specified times, without any changes to the a/v codecs.
    All the original video files are encoded in h.264, and this is also the required encoding for the new files (which will be streamed through a CDN to the clients' flash players).

    My problem is that only a small portion of the new files come out encoded in h.264; most of them don't (h.264 is a must, otherwise the files won't play on the clients' side).
    I can't trace the problem back to the original videos: when I run the same ffmpeg command manually, with the same parameters and on the same files, the output files come out just fine. It seems arbitrary.

    I use ffprobe to get information about the files' codecs.
    For example:
    ffprobe output for one of the large (original) video files:

    ...
    Stream #0.0(und) : Video : h264, yuv420p, 640x352, 499 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
    ...

    ffprobe output for the corresponding cut file:

    ...
    Stream #0.0(und) : Video : mpeg4, yuv420p, 640x352 [PAR 1:1 DAR 20:11], 227 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
    ...

    As can be seen, the difference is in 'mpeg4' vs. 'h264'.

    Any insights on what can cause the new files to come out in the wrong encoding would be greatly appreciated.

    Thanks!

    Edit: Problem resolved
    After analyzing all the files, I noticed that about two thirds of them had come out with the wrong codec.
    Since I used three machines for the cutting process (three separate EC2 servers), it occurred to me that two of them had a bad ffmpeg installation (as @LordNeckbeard suggested in his answer).
    I ran the process again, only on the invalid files, on the third machine alone, which produced the desired result.
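
    A quick way to audit a batch like this is the ffprobe check described above; a minimal sketch (the file list is a placeholder assumption; in practice it would come from the S3 bucket listing):

    import subprocess

    def video_codec(path):
        # codec name of the first video stream, as reported by ffprobe
        out = subprocess.run([
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=codec_name",
            "-of", "default=noprint_wrappers=1:nokey=1",
            path,
        ], capture_output=True, text=True, check=True)
        return out.stdout.strip()

    for path in ["short-output-video.mp4"]:  # placeholder file list
        codec = video_codec(path)
        if codec != "h264":
            print("wrong codec (%s): %s" % (codec, path))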

  • How to get the thumbnail of a base64 encoded video file in Node.js?

    3 October 2018, by Wai Yan Hein

    I am developing a web application using Node.js and storing files in an Amazon S3 bucket. What I am doing now is that when I upload a video file (mp4) to the S3 bucket, I fetch a thumbnail photo of the video from a Lambda function. For fetching the thumbnail I am using this package: https://www.npmjs.com/package/ffmpeg. I tested the package locally on my laptop and it works.

    Here is the code I tested on my laptop:

    var ffmpeg = require('ffmpeg');

    module.exports.createVideoThumbnail = function(req, res)
    {
       try {
           // open the local video file
           var process = new ffmpeg('public/lalaland.mp4');
           process.then(function (video) {

               // extract 5 JPEG frames (one per second) into ./public
               video.fnExtractFrameToJPG('public', {
                   frame_rate : 1,
                   number : 5,
                   file_name : 'my_frame_%t_%s'
               }, function (error, files) {
                   if (!error)
                       console.log('Frames: ' + files);
                   else
                       console.log(error);
               });

           }, function (err) {
               console.log('Error: ' + err);
           });
       } catch (e) {
           console.log(e.code);
           console.log(e.msg);
       }
       // note: this response is sent before the frame extraction has finished
       res.json({ status : true , message: "Video thumbnail created." });
    }

    The above code works well; it gives me thumbnail photos of the video file (mp4). Now I am trying to use that code in an AWS Lambda function. The issue is that the above code takes a video file path as the parameter to fetch the thumbnails. In the Lambda function I can only fetch the base64 encoded format of the file. I can get the id (S3 path) of the file, but I cannot use it as the parameter (file path) to fetch the thumbnails, as my S3 bucket does not allow public access.

    So what I tried to do was save the base64 encoded video file locally in the Lambda function project itself and then pass the file path as the parameter for fetching the thumbnails. But the issue was that the AWS Lambda file system is read-only, so I cannot write any file to it. So what I am trying to do right now is to retrieve the thumbnails directly from the base64 encoded video file. How can I do it?
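
    One workable direction, sketched below in Python for brevity (the approach ports directly to Node.js): a Lambda filesystem is read-only except for /tmp, which is writable scratch space, so the decoded video can be staged there and handed to an ffmpeg binary (e.g. one bundled as a Lambda layer). The event field and file names are assumptions, not the poster's code:

    import base64
    import subprocess

    def handler(event, context):
        # assumed event field carrying the base64 encoded video
        src = "/tmp/input.mp4"    # /tmp is the one writable path in Lambda
        thumb = "/tmp/thumb.jpg"
        with open(src, "wb") as f:
            f.write(base64.b64decode(event["video_base64"]))

        # grab a single frame one second into the video
        subprocess.run([
            "ffmpeg", "-y", "-ss", "1", "-i", src,
            "-frames:v", "1", thumb,
        ], check=True)

        with open(thumb, "rb") as f:
            return {"thumbnail_base64": base64.b64encode(f.read()).decode()}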

  • Real-time watermarking with MPEG-DASH

    14 July 2016, by Calvin W.

    In the system, I want to add a unique watermark (e.g. the client's IP address and a timestamp) to the video that he/she wants to watch.

    But when I handled it with OpenCV, it took 25 minutes for a 15-minute video, and I then needed to transcode the result to mp4 with ffmpeg.

    Now I'm trying ffmpeg's watermark function, but it still takes some time.

    Is it possible to send the video to the client side with MPEG-DASH while transcoding it with ffmpeg?

    System spec (Amazon EC2 c3.xlarge):
    Intel Xeon E5-2680 v2 (Ivy Bridge) - 4 vCPU
    7.5G RAM
    40GB SSD
    Ubuntu 14.04 LTS
    OpenCV2.4.13
    ffmpeg 3.1.1

    Code:

    import cv2
    import sys
    import time
    from datetime import datetime as dt

    # frame rate of the input video
    fps = float(sys.argv[4])
    # encode to AVC
    fourcc = cv2.cv.CV_FOURCC('A', 'V', 'C', '1')
    # transparency of text
    alpha = 0.1
    beta = 1 - alpha

    # input video
    cap = cv2.VideoCapture(sys.argv[3])

    # current frame index, start from 0
    frameIndex = 0

    # frame indices bounding the 10s-20s window where the watermark is drawn
    time10 = int(10 * fps)
    time20 = int(20 * fps)

    # get input video's width/height
    width = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))

    # config output (error using .mp4)
    out = cv2.VideoWriter('output.avi', fourcc, fps, (width, height))

    # access time
    timeStr = dt.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')

    requestIP = sys.argv[1]
    username = sys.argv[2]
    text = "%s %s %s" % (requestIP, username, timeStr)


    # start loading video
    while(cap.isOpened()):
       ret, frame = cap.read()
       if ret:
           # add text between 10s - 20s
           if frameIndex > time10 and frameIndex < time20:
               # clone a new frame to add text
               overlay = frame.copy()
               cv2.putText(overlay, text, (100, 100), cv2.FONT_HERSHEY_PLAIN, 0.5, (255, 255, 255))
               # combine both frame and make text transparent
               cv2.addWeighted(overlay, alpha, frame, beta, 0, frame)
           # write frame to output
           out.write(frame)
           frameIndex += 1
       else:
           # no frame returned: end of stream
           break
       # stop once the last frame has been processed
       if cap.get(cv2.cv.CV_CAP_PROP_POS_FRAMES) == cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT):
           break
    # End of video
    # release
    cap.release()
    out.release()
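
    On the question itself: ffmpeg can burn in the text and package MPEG-DASH in a single pass, and the dash muxer writes the manifest and segments progressively, so a player can start fetching them while the transcode is still running. A minimal sketch under those assumptions (paths, watermark text, and tuning values are placeholders; drawtext needs an ffmpeg build with libfreetype):

    import subprocess

    text = "203.0.113.7 alice 2016-07-14 12:00:00"  # placeholder watermark

    # one pass: overlay the text and emit DASH segments plus an .mpd manifest;
    # the quotes inside drawtext protect the colons in the timestamp
    subprocess.run([
        "ffmpeg", "-re", "-i", "input.mp4",
        "-vf", "drawtext=text='%s':x=100:y=100:fontsize=24:fontcolor=white@0.5" % text,
        "-c:v", "libx264", "-c:a", "aac",
        "-f", "dash", "stream.mpd",
    ], check=True)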