
Media (91)

Other articles (49)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution   Version name           Version number
    Debian         Squeeze                6.x.x
    Debian         Wheezy                 7.x.x
    Debian         Jessie                 8.x.x
    Ubuntu         The Precise Pangolin   12.04 LTS
    Ubuntu         The Trusty Tahr        14.04

    If you want to help us improve this list, you can give us access to a machine whose distribution is not mentioned above, or send the fixes needed to add (...)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

On other sites (4493)

  • How to convert video and extract frames from video using FFMPEG in a single command

    13 July 2013, by kheya

    I am trying to convert a video file and extract image frames from it in a single command.
    I can do this in 2 steps, but I want to use a pipe-like technique to do it in one.

    This is what I have:
    for %%a in ("*.avi") do ffmpeg -i "%%a" -c:v libx264 -preset slow -crf 20 -c:a libvo_aacenc -b:a 128k "%%~na.mp4" <-- converts correctly

    I need to incorporate this extraction command:

    ffmpeg -i inputfile.avi -r 1 -t 4 image-%d.jpeg

    Merging the two commands gives an error.

    How do I do it?

    EDIT:
    This is what I have now, but it only converts the video; no JPEG image is created as a second output.

    for %%a in ("*.avi") do ffmpeg -i "%%a" -c:v libx264 -preset slow -crf 20 -c:a libvo_aacenc -b:a 128k "%%~na.mp4" | ffmpeg -r 1 -s 4cif "%%~na.jpeg"
  • ffmpeg encoding a video with time_base not equal to framerate does not work in HTML5 video players

    1 July 2019, by Gilgamesh22

    I have a time_base of 1/90000 with a frame rate of 30. I can generate an H.264 video that plays in VLC, but the same video does not work in the browser player. If I change the time_base to 1/30, it works fine.

    Note: I am changing frame->pts appropriately to match the time_base.
    Note: the video does not have an audio stream.

    //header.h
    AVCodecContext *cctx;
    AVStream* stream;

    Here is the non-working example code:

    //source.cpp
    stream->time_base = { 1, 90000 };
    stream->r_frame_rate = { fps, 1 };
    stream->avg_frame_rate = { fps, 1 };

    cctx->codec_id = codecId;
    cctx->time_base = { 1, 90000 };
    cctx->framerate = { fps, 1 };

    // ......
    // later, when adding frames (timestamps are in milliseconds)
    frame->pts = (timestamp - startTimeStamp) * 90;

    Here is the working example code

    //source.cpp
    stream->time_base = { 1, fps};
    stream->r_frame_rate = { fps, 1 };
    stream->avg_frame_rate = { fps, 1 };

    cctx->codec_id = codecId;
    cctx->time_base = { 1, fps };
    cctx->framerate = { fps, 1 };

    // ......
    // later, when adding frames (timestamps are in milliseconds)
    frame->pts = (timestamp - startTimeStamp) / (1000 / fps);

    Any ideas why the second example works in the HTML5 video player and the first does not?
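
    As an aside (not from the thread itself): whichever time_base is used, millisecond timestamps can be rescaled into it with av_rescale_q instead of hand-written factors such as * 90 or / (1000 / fps). A minimal sketch, reusing cctx, frame, timestamp and startTimeStamp from the question:

    extern "C" {
    #include <libavutil/mathematics.h>   // av_rescale_q
    #include <libavutil/rational.h>      // AVRational
    }

    // ... in the "add frame" code, with timestamps still in milliseconds:
    AVRational ms_time_base = { 1, 1000 };   // time base of the incoming millisecond offsets
    frame->pts = av_rescale_q(timestamp - startTimeStamp,
                              ms_time_base,
                              cctx->time_base);   // correct for { 1, 90000 } and { 1, fps } alike

    If the stream time base differs from the codec time base, the encoded packets usually also need their timestamps rescaled before muxing (for example with av_packet_rescale_ts(pkt, cctx->time_base, stream->time_base)); a mismatch there is one possible reason a file plays in VLC but not in a browser.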

  • Prevent suspend event when streaming video via HTML video tag

    24 September 2014, by jasongullickson

    I seem to be having the opposite problem from most people who are streaming video using the HTML video tag; I'm saturating the client with data.

    When playing a long video served via ffserver (webm container), everything works great, but eventually the browser (Chrome in this case) begins throwing "suspend" events. After a number of these (50-100), a "stalled" event fires and playback stops.

    I believe the problem is that once Chrome has buffered a certain amount of video it goes into "suspend" and stops downloading more data. I’ve tested this theory by throttling the speed at which video data is delivered, and if I keep the delivered frame rate close to the playback rate, I can prevent this from happening, but of course deliberately holding back server performance isn’t ideal.

    What I’m looking for is either a way to suppress this "suspend" behavior altogether, or alternatively a way to respond to the event that prevents the eventual "stalled" state.

    Presumably the browser at some point exits the "suspend" state and begins requesting data again, but I haven't actually observed this occurring. I'm using a chain of mpeg2 -> ffmpeg -> ffserver to stream the video, so if the browser is attempting to resume loading data I don't see the request in my application. I could use a proxy or a sniffer to watch for the traffic, but I would expect there is an ffserver log that could tell me the same thing. In any event, if it is attempting to resume the download it is failing, and there is no indication server-side of any reason for the request to fail (in fact I can pull up the same video feed from ffserver and see it playing correctly).

    So I feel like I’ve isolated this to a client-side playback issue, and one where the browser is voluntarily giving up on loading the data, but I’m not sure how to convince it to "not do that", or at least attempt to resume when it runs the buffer dry.
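
    Not from the thread, but as a rough sketch of the "respond to the event" option in plain JavaScript (the recovery strategy is an assumption, untested against an ffserver/webm stream, and a live stream may not allow the seek): watch the video element for suspend and stalled, and on stalled force it to reopen the source and jump back to where playback stopped.

    const video = document.querySelector('video');

    video.addEventListener('suspend', () => {
      // The browser has paused fetching for now; its buffer is full enough.
      console.log('suspend at', video.currentTime);
    });

    video.addEventListener('stalled', () => {
      // Try to recover instead of letting playback die: reload the source,
      // then seek back and resume once metadata is available again.
      const resumeAt = video.currentTime;
      video.load();
      video.addEventListener('loadedmetadata', () => {
        video.currentTime = resumeAt;
        video.play();
      }, { once: true });
    });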