Advanced search

Media (91)

Other articles (19)

  • Media quality after processing

    21 June 2013, by

    Properly configuring the software that processes media is important in order to strike a balance between the parties involved (the host's bandwidth, the media quality for the editor and the visitor, and accessibility for the visitor). How should you set the quality of your media?
    The higher the media quality, the more bandwidth is used, and a visitor with a low-speed internet connection will have to wait longer. Conversely, the lower the quality, the more degraded the media becomes, or even (...)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011, by

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (6844)

  • Convert AVStream PTS value to real time in seconds

    15 January 2015, by Kamlesh

    The code snippet below retrieves the PTS values of frames from a video file:

    AVStream *stream = avctx->streams[avpkt.stream_index];
    /* Decode the packet; use the frame only if decoding succeeded and a full frame came out. */
    if ( ( err = avcodec_decode_video2 ( stream->codec, frame, &got_frame, &avpkt ) ) >= 0 && got_frame )
    {
       /* Best-effort PTS, expressed in the stream's time base. */
       int64_t pts = av_frame_get_best_effort_timestamp ( frame );
       /* Rescale from the stream time base to AV_TIME_BASE_Q (1/AV_TIME_BASE). */
       pts = av_rescale_q ( pts, stream->time_base, AV_TIME_BASE_Q );
    }

    The PTS values that it returns are given below:

    • 66733
    • 100100
    • 133467

    The confusion is about the time format of the above values: are they in milliseconds or microseconds?
    Is there any other way to get real-time PTS values for the frames, as these will be required for subtitle rendering?
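
    As a minimal sketch (not part of the original question), one common way to express the best-effort timestamp in seconds is to multiply it by the stream time base converted to a double with av_q2d(); the variable names below are illustrative and assume the same stream and frame as in the snippet above.

    /* Illustrative sketch: convert the frame's best-effort PTS, which is
       expressed in stream->time_base units, into seconds. av_q2d() turns
       the AVRational time base into a double. */
    int64_t best_pts = av_frame_get_best_effort_timestamp ( frame );
    double pts_seconds = best_pts * av_q2d ( stream->time_base );

    After av_rescale_q ( pts, stream->time_base, AV_TIME_BASE_Q ), as in the snippet above, the value is instead in units of 1/AV_TIME_BASE, so dividing by AV_TIME_BASE also yields seconds.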

  • Adding current time as timestamp in h264 raw stream with few frames

    1 April 2020, by Michaël

    I have a program that spits out an h264 raw stream (namely, screenrecord on Android). I'm using ffmpeg to add a PTS (Presentation Time Stamp) to the frames as follows:

    $ my-program | ffmpeg -i - -filter:v setpts='(RTCTIME - RTCSTART) / (TB * 1000000)' out.mp4

    This filter computes the current time and uses it as the PTS.

    The trouble is that my-program does not produce any output if there is no change in the video. Since ffmpeg seems to wait for a bunch of frames before putting them through the setpts filter, the computed PTS won't be correct. In particular, the last frame of a sequence will be timestamped when the next sequence starts.

    Question: Is there a way (with ffmpeg or otherwise) to add the current time as the PTS of raw h264 frames, where "current time" means the moment the frame is received rather than the moment it is output?

    Note: the problem is not caused by buffering in the pipe.
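
    One direction that might be worth testing here (a sketch only, not verified against screenrecord's exact output) is to let the demuxer stamp each packet with the wall clock at the moment it is read from the pipe, using the documented use_wallclock_as_timestamps input option, and then copy the stream without re-encoding:

    # Sketch, assuming the same my-program as above: timestamp packets with the
    # wall clock as they are read from stdin, then remux without re-encoding.
    $ my-program | ffmpeg -use_wallclock_as_timestamps 1 -i - -c:v copy out.mp4

    Whether the wall clock is taken when each packet actually arrives from the pipe, or only once the demuxer has buffered enough data to probe the stream, is exactly what would need to be checked for this use case.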

  • Estimate time and memory required when transcoding video FFMPEG [on hold]

    8 March 2019, by Anh Vo Nguyen Nhat

    Currently, I am trying to predict in advance how long it will take to transcode a video (e.g. transcoding 1920x1080 H264 video to 1280x720 VP9) using the FFMPEG tool.
    I have used the following features to build a simple neural network that predicts the time:
    - Video Resolution (Input + Output)
    - Video Duration
    - Video Codec (Input + Output)
    - Video Bitrate
    - Video Framerate
    - Number of B, I, P frames

    However, the results are not really promising. Is there any other way to estimate/predict how long it takes to transcode a video? Are there any other features besides those listed that affect the transcoding time?

    Besides the transcoding time, I also need to forecast the memory required by the process. I see a linear relation between the output resolution and the memory consumed. However, the memory consumed differs from one computer to another: for example, a computer with 128 GB of RAM takes 57 GB to transcode, while a 64 GB computer takes only 37 GB. Is there a formula to calculate the required memory from the FFMPEG transcoding algorithm?
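
    One way to collect per-machine ground truth for both quantities, rather than relying on a closed-form formula, is FFmpeg's -benchmark option, which reports real/user/system time and maximum memory consumption at the end of an encode. The file names and encoder settings below are only an illustrative sketch of the 1080p H264 to 720p VP9 transcode described above:

    # Illustrative sketch: run the transcode and print timing and peak memory
    # statistics when it finishes, for use as additional training data.
    $ ffmpeg -benchmark -i input_1080p_h264.mp4 \
             -c:v libvpx-vp9 -vf scale=1280:720 output_720p_vp9.webm

    Measuring this way on each target machine also sidesteps the fact, noted above, that the same transcode consumes different amounts of memory on different computers; the memory figure may not be reported on every system.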