Advanced search

Media (0)

Keyword: - Tags - /xmlrpc

No media matching your criteria is available on this site.

Other articles (36)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document uploaded to the site.
    For each uploaded document it performs two successive operations: it creates an additional version that can easily be viewed online, while leaving the original available for download in case it cannot be read in a web browser; and it retrieves the original document's metadata in order to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (6252)

  • Ffmpeg encoding a video with time_base not equal to framerate does not work in a hardware accelerated video player

    10 January 2020, by Gilgamesh22

    I have a time_base of 90000 with a frame rate of 30. I can generate an H.264 video that plays in VLC, but this video does not work in a hardware accelerated Chrome web player using Intel HD Graphics 530. If I change the time_base to 30, it works fine.

    Note: I am changing the frame->pts appropriately to match the time_base.
    Note: the video does not have an audio stream.

    //header.h
    AVCodecContext *cctx;
    AVStream* stream;

    Here is the non-working example code:

    //source.cpp
    stream->time_base = { 1, 90000 };
    stream->r_frame_rate = { fps, 1 };
    stream->avg_frame_rate = { fps, 1 };

    cctx->codec_id = codecId;
    cctx->time_base = { 1, 90000 };
    cctx->framerate = { fps, 1 };

    // ......
    // frame submission code later on; timestamps are in milliseconds
    frame->pts = (timestamp - startTimeStamp) * 90;

    Here is the working example code:

    //source.cpp
    stream->time_base = { 1, fps};
    stream->r_frame_rate = { fps, 1 };
    stream->avg_frame_rate = { fps, 1 };

    cctx->codec_id = codecId;
    cctx->time_base = { 1, fps };
    cctx->framerate = { fps, 1 };

    // ......
    // frame submission code; timestamps are in milliseconds
    frame->pts = (timestamp - startTimeStamp) / (1000 / fps);

    Any ideas why the second example works in the video player and the first does not?
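
    For reference, the two pts formulas above are two instances of the same conversion from a wall-clock timestamp to time_base ticks. Here is a minimal sketch of that arithmetic (Python for brevity; the function name and the millisecond inputs are illustrative, not part of the original code):

    # Sketch only: convert a millisecond timestamp to pts expressed in
    # time_base = tb_num/tb_den units, mirroring the two formulas above.
    def ms_to_pts(timestamp_ms, start_ms, tb_num=1, tb_den=90000):
        elapsed_s = (timestamp_ms - start_ms) / 1000.0   # seconds since the start timestamp
        return round(elapsed_s * tb_den / tb_num)        # ticks of tb_num/tb_den seconds

    # With tb_den = 90000 this reduces to (timestamp - start) * 90,
    # and with tb_den = fps it reduces to (timestamp - start) / (1000 / fps).
    print(ms_to_pts(1033, 1000))              # 2970 ticks at time_base 1/90000
    print(ms_to_pts(1033, 1000, tb_den=30))   # 1 tick at time_base 1/30

    Both computations describe the same instants, only with a different tick size.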

  • ffmpeg black screen issue for video generation from a list of frames

    11 May 2023, by arlaine

    I used a video to generate a list of frames from it, then I wanted to create multiple videos from this list of frames.
    I set starting and ending frame indexes for each "sub video", so for example
    indexes = [[0, 64], [64, 110], [110, 234], [234, 449]], and those indexes help my code generate 4 videos of various durations. The idea is to decompose the original video into multiple sub videos. My code is working just fine and the videos are generated.

    But every sub video starts with several seconds of black screen; only the first generated video (the one using indexes[0] for its starting and ending frames) comes out without this black screen part. I've tried changing the frame rate of each sub video according to the number of frames, and things like that, but it didn't work. You can find my code below.

    


    for i, (start_idx, end_idx) in enumerate(self.video_frames_indexes):
        if end_idx - start_idx > 10:
            shape = cv2.imread(f'output/video_reconstitution/{video_name}/final/frame_{start_idx}.jpg').shape
            os.system(f'ffmpeg -r 30 -s {shape[0]}x{shape[1]} -i output/video_reconstitution/{video_name}/final/frame_%d.JPG'
                      f' -vf "select=between(n\,{start_idx}\,{end_idx})" -vcodec libx264 -crf 25'
                      f' output/video_reconstitution/IMG_7303/sub_videos/serrage_{i}.mp4')


    


    Here is just the ffmpeg command:

    


    ffmpeg -r 30 -s {shape[0]}x{shape[1]} -i output/video_reconstitution/{video_name}/final/frame_%d.JPG -vf "select=between(n\,{start_idx}\,{end_idx})" -vcodec libx264 -crf 25 output/video_reconstitution/IMG_7303/sub_videos/serrage_{i}.mp4
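
    One hedged aside on the command above (an editorial sketch, not the poster's code): the select filter only drops frames and leaves their timestamps untouched, so a range that starts at frame 234 keeps timestamps beginning around 7.8 s, which some players render as a black lead-in. The usual companion is setpts=PTS-STARTPTS to shift the kept range back to zero. Paths and values below follow the snippet above and are illustrative:

    # Sketch only: same command with setpts=PTS-STARTPTS appended so the selected
    # range restarts at pts 0 (-s is omitted here; JPEG input already carries its size).
    import os

    start_idx, end_idx, i = 234, 449, 3      # illustrative values
    video_name = 'IMG_7303'                  # illustrative, taken from the paths above

    os.system(
        f'ffmpeg -r 30 -i output/video_reconstitution/{video_name}/final/frame_%d.JPG'
        f' -vf "select=between(n\\,{start_idx}\\,{end_idx}),setpts=PTS-STARTPTS"'
        f' -vcodec libx264 -crf 25'
        f' output/video_reconstitution/{video_name}/sub_videos/serrage_{i}.mp4'
    )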


    


  • Pipe video frames from ffmpeg to canvas without loading the entire video into memory

    1 January 2024, by Aviato

    I am working on a project that involves frame manipulation, and I decided to use the node-canvas API for it. I used to work with OpenCV in Python, where the cv2.VideoCapture class takes a video as input and lets you loop through its frames one at a time without having to load them all into memory at once.
    Now I have tried a lot of ways to replicate the same thing using ffmpeg, i.e. loading frames from a video in an ordered but "on-demand" fashion.

    


    I tried running ffmpeg as a child process and reading the frames from its standard output.

    


    const spawnProcess = require('child_process').spawn,
        ffmpeg = spawnProcess('ffmpeg', [
            '-i', 'test.mp4',
            '-vcodec', 'png',
            '-f', 'rawvideo',
            '-s', '1920*1080', // size of one frame
            'pipe:1'
        ]);
    let i = 0; // frame counter
    ffmpeg.stdout.on('data', (data) => {
        try {
            // console.log(tf.node.decodeImage(data).shape)
            console.log(`${++i} frames read`)
            //context.drawImage(data, 0, 0, width, height)
        } catch(e) {
            console.log(e)
        }
    })


    


    The console shows something around 4000+ log lines, but the video only had 150 frames. After much investigating and console logging of the data, I found that it was raw buffer data and that it is not delivered per frame: the 'data' callback returns the buffer in arbitrary, unstructured chunks.
    I want to read frames from a video and process each one at a time in memory; I don't want to hold all the frames at once in memory or on the filesystem.

    


    I also want to pipe the frames in a format that can be rendered onto a canvas using drawImage.
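
    For comparison with the cv2.VideoCapture workflow mentioned above, here is a minimal editorial sketch (Python, with illustrative names and a hard-coded 1920x1080 size) of the fixed-size-read idea: with -f rawvideo and a known pixel format, every frame has a known byte size, so the pipe can be consumed one complete frame at a time instead of in arbitrary chunks.

    # Sketch only: read one raw RGBA frame at a time from an ffmpeg pipe.
    import subprocess

    WIDTH, HEIGHT = 1920, 1080            # illustrative; must match the source video
    FRAME_SIZE = WIDTH * HEIGHT * 4       # bytes per RGBA frame

    proc = subprocess.Popen(
        ['ffmpeg', '-i', 'test.mp4',
         '-f', 'rawvideo', '-pix_fmt', 'rgba',
         '-s', f'{WIDTH}x{HEIGHT}',
         'pipe:1'],
        stdout=subprocess.PIPE,
    )

    frame_index = 0
    while True:
        raw = proc.stdout.read(FRAME_SIZE)   # blocks until one full frame or EOF
        if len(raw) < FRAME_SIZE:
            break                            # end of stream
        # 'raw' now holds exactly one RGBA frame; this is the kind of buffer a
        # canvas context can consume via createImageData()/putImageData().
        frame_index += 1
        print(f'{frame_index} frames read')

    proc.wait()

    The same accumulation logic applies to the Node stream above: buffer the 'data' chunks until width * height * 4 bytes are available, slice off one frame, and keep the remainder for the next one.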