
Other articles (25)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and in MP3 (supported by Flash).
    Where possible, text documents are analysed to extract the data needed for indexing by search engines, and are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
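
    As a rough illustration only (MediaSPIP's actual encoder settings are not documented here), conversions of this kind correspond to stock ffmpeg invocations such as:

    ffmpeg -i source.mov -c:v libtheora -c:a libvorbis out.ogv   # Ogv for HTML5
    ffmpeg -i source.mov -c:v libvpx -c:a libvorbis out.webm     # WebM for HTML5
    ffmpeg -i source.mov -c:v libx264 -c:a aac out.mp4           # MP4 for Flash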

  • Emballe médias: what is it for?

    4 February 2011

    This plugin is designed to manage sites for publishing documents of all types.
    It creates "media" items, namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (see its documentation for more details).
    It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.

On other sites (6159)

  • ffmpeg convert variable framerate .webm to constant framerate video

    4 November 2019, by Dashadower

    I have a .webm file of a recording of a game at 16 fps. However, when I try to process the video with OpenCV, it turns out the recording has a variable framerate, so when I try to use OpenCV to grab one frame per second by reading every 16th frame, it fails: the video stream ends prematurely.

    Therefore, I’m trying to convert the variable-framerate .webm video, which claims a framerate of 16 fps, into a constant-framerate one, so I can extract one frame for every second. I’ve tried the following ffmpeg command from https://ffmpeg.zeranoe.com/forum/viewtopic.php?t=5518:

    ffmpeg -i input.webm -c:v copy -b:v copy -r 16 output.webm

    However, the following error occurs:

    [NULL @ 00000272ccbc0c40] [Eval @ 000000bc11bfe2f0] Undefined constant or missing '(' in 'copy'
    [NULL @ 00000272ccbc0c40] Unable to parse option value "copy"
    [NULL @ 00000272ccbc0c40] Error setting option b to value copy.

    Error setting up codec context options.

    Here is the code I’m trying to use to process a frame every second:

    import cv2

    video = cv2.VideoCapture(test_mp4_vod_path)
    print("Opened ", test_mp4_vod_path)
    print("Processing MP4 frame by frame")

    # seek to the frame you want to start reading from:
    # set this manually to fps * the start time in seconds
    video.set(cv2.CAP_PROP_POS_FRAMES, 0)
    success, frame = video.read()
    #fps = int(video.get(cv2.CAP_PROP_FPS))  # this will return 0!
    fps = 16  # hardcode fps
    total_frame_count = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
    print("Loading video %d seconds long with FPS %d and total frame count %d " % (total_frame_count/fps, fps, total_frame_count))

    count = 1
    while video.isOpened():
       success, frame = video.read()
       if not success:
           break

       if count % fps == 0:
           print("%dth frame is %d seconds on video"%(count, count/fps))
       count += 1

    The code finishes before it gets anywhere near the end of the video, since the video isn’t at a constant FPS.
    How can I convert a variable-FPS video to a constant-FPS video?
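
    Two details stand out in that command: -b:v expects a numeric bitrate, so "-b:v copy" cannot be parsed, which is exactly what the error message says; and -c:v copy stream-copies packets without decoding, so it cannot change the frame rate at all. Forcing a true constant 16 fps requires re-encoding, for example with the fps filter; a minimal sketch (the libvpx encoder choice is an assumption):

    ffmpeg -i input.webm -vf fps=16 -c:v libvpx -c:a copy output.webm

    Once the file really is constant-framerate, reading every 16th frame corresponds to one frame per second; alternatively, OpenCV's CAP_PROP_POS_MSEC timestamp can be used to pick frames by time without re-encoding at all.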

  • ffmpeg concatenating two videos unexpectedly changes the first second(s) of the 2nd video

    29 June 2019, by Roy

    I used ffmpeg to concatenate two of my gameplay recordings. I wrote a list.txt file which lists the two files:

    list.txt:
    file 2019~06~28_~_Game_1_~_Part_2.mp4
    file 2019~06~28_~_Game_1_~_Part_3.mp4

    I then run ffmpeg to concatenate them:

    ffmpeg -safe 0 -f concat -i list.txt -c copy "output.mp4"

    However, the resulting video seems to skip frames (or run through them very quickly) during the first second(s) of the second video, giving the impression that the motion suddenly fast-forwards.

    The two videos were recorded by the same game recorder, GeForce Experience, in a single game session, so they should join smoothly when concatenated.

    Here is the output of ffmpeg:

    ffmpeg version 3.4.1 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 7.2.0 (GCC)
     configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth --enable-libmfx
     libavutil      55. 78.100 / 55. 78.100
     libavcodec     57.107.100 / 57.107.100
     libavformat    57. 83.100 / 57. 83.100
     libavdevice    57. 10.100 / 57. 10.100
     libavfilter     6.107.100 /  6.107.100
     libswscale      4.  8.100 /  4.  8.100
     libswresample   2.  9.100 /  2.  9.100
     libpostproc    54.  7.100 / 54.  7.100
    [mov,mp4,m4a,3gp,3g2,mj2 @ 000001600bbdb5e0] Auto-inserting h264_mp4toannexb bitstream filter
    Input #0, concat, from 'list.txt':
     Duration: N/A, start: 0.000000, bitrate: 24674 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/smpte170m/bt470m), 1920x1080 [SAR 1:1 DAR 16:9], 24479 kb/s, 59.69 fps, 60 tbr, 90k tbn, 120 tbc
       Metadata:
         creation_time   : 2019-06-29T04:43:18.000000Z
         handler_name    : VideoHandle
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 195 kb/s
       Metadata:
         creation_time   : 2019-06-29T04:43:18.000000Z
         handler_name    : SoundHandle
    File 'output.mp4' already exists. Overwrite ? [y/N] y
    Output #0, mp4, to 'output.mp4':
     Metadata:
       encoder         : Lavf57.83.100
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, smpte170m/smpte170m/bt470m), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 24479 kb/s, 59.69 fps, 60 tbr, 90k tbn, 90k tbc
       Metadata:
         creation_time   : 2019-06-29T04:43:18.000000Z
         handler_name    : VideoHandle
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 195 kb/s
       Metadata:
         creation_time   : 2019-06-29T04:43:18.000000Z
         handler_name    : SoundHandle
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
     Stream #0:1 -> #0:1 (copy)
    Press [q] to stop, [?] for help
    [mov,mp4,m4a,3gp,3g2,mj2 @ 000001600bbdb5e0] Auto-inserting h264_mp4toannexb bitstream filter
    frame= 7405 fps=0.0 q=-1.0 Lsize=  221175kB time=00:02:03.63 bitrate=14655.4kbits/s speed= 157x
    video:218137kB audio:2862kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.079741%

    In particular, I don’t know what "Auto-inserting h264_mp4toannexb bitstream filter" means. Did this cause the unexpected change?
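
    A note on that message: the h264_mp4toannexb bitstream filter is routine here; the concat demuxer needs H.264 in Annex B form to splice the streams, and the filter by itself does not alter timing. With -c copy the packets keep their original timestamps, so a timestamp discontinuity between the two recordings shows up as exactly this fast-forward effect at the join. One possible workaround, at the cost of re-encoding, is to let ffmpeg regenerate timestamps; a sketch (the x264/AAC settings are assumptions):

    ffmpeg -f concat -safe 0 -i list.txt -c:v libx264 -crf 18 -c:a aac output.mp4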

  • FFmpeg C++ decoding in a separate thread

    12 June 2019, by Brigapes

    I’m trying to decode a video with FFmpeg, convert it to an OpenGL texture and display it inside a cocos2dx engine. I’ve managed to do that and it displays the video as I wanted; the problem now is performance. I get a sprite update every frame (the game is fixed at 60 fps, the video is 30 fps), so at first I decoded and converted frames alternately, which didn’t work well. Now I have a separate thread that decodes in an infinite while loop, with a sleep() so it doesn’t hog the CPU.
    What I currently have set up is two PBO framebuffers and a bool flag that tells my FFmpeg thread loop to decode another frame, since I don’t know how to wait for the right moment to decode the next one. I’ve searched online for a solution to this kind of problem but didn’t find any answers.

    I’ve looked at this: Decoding video directly into a texture in separate thread, but it didn’t solve my problem, since that answer only converts YUV to RGB inside OpenGL shaders, which I haven’t done yet and which isn’t the issue at the moment.

    Additional info that might be useful: I don’t need to end the thread before application exit, and I’m open to using any video format, including lossless.

    OK, so the main decoding loop looks like this:

    //.. this is inside of a constructor / init
    //adding thread to array in order to save the thread    
    global::global_pending_futures.push_back(std::async(std::launch::async, [=] {
           while (true) {
               if (isPlaying) {
                   this->decodeLoop();
               }
               else {
                   std::this_thread::sleep_for(std::chrono::milliseconds(3));
               }
           }
       }));

    The reason I use a bool to check whether the frame was used is that the main decoding function takes about 5 ms to finish in debug and then should wait about 11 ms for the frame to be displayed, so I can’t know when the frame was displayed and I also don’t know how long decoding took.

    Decode function:

    void video::decodeLoop() { //this should loop in a separate thread
       frameData* buff = nullptr;
       if (buf1.needsRefill) {
       /// buf1.bufferLock.lock();
           buff = &buf1;
           buf1.needsRefill = false;
           firstBuff = true;
       }
       else if (buf2.needsRefill) {
           ///buf2.bufferLock.lock();
           buff = &buf2;
           buf2.needsRefill = false;
           firstBuff = false;
       }

       if (buff == nullptr) {
           std::this_thread::sleep_for(std::chrono::milliseconds(1));
           return;//error? //wait?
       }

       //pack pixel buffer?

       if (getNextFrame(buff)) {
           getCurrentRBGConvertedFrame(buff);
       }
       else {
           loopedTimes++;
           if (loopedTimes >= repeatTimes) {
               stop();
           }
           else {
               restartVideoPlay(&buf1);//restart both
               restartVideoPlay(&buf2);
               if (getNextFrame(buff)) {
                   getCurrentRBGConvertedFrame(buff);
               }
           }
       }
    /// buff->bufferLock.unlock();

       return;
    }

    As you can tell, I first check whether a buffer has been consumed using the bool needsRefill, and only then decode another frame.

    frameData struct:

       struct frameData {
           frameData() {};
           ~frameData() {};

           AVFrame* frame;
           AVPacket* pkt;
           unsigned char* pdata;
           bool needsRefill = true;
           std::string name = "";

           std::mutex bufferLock;

           ///unsigned int crrFrame
           GLuint pboid = 0;
       };

    And this is called every frame:

    void video::actualDraw() { //meant for cocos implementation
       if (this->isVisible()) {
           if (this->getOpacity() > 0) {
               if (isPlaying) {
                   if (loopedTimes >= repeatTimes) { //ignore -1 because comparing unsigned to signed
                       this->stop();
                   }
               }

               if (isPlaying) {
                   this->setVisible(true);

                   if (!display) { //skip frame
                       ///this->getNextFrame();
                       display = true;
                   }
                   else if (display) {
                       display = false;
                       auto buff = this->getData();                    
                       width = this->getWidth();
                       height = this->getHeight();
                       if (buff) {
                           if (buff->pdata) {

                               glBindBuffer(GL_PIXEL_UNPACK_BUFFER, buff->pboid);
                               glBufferData(GL_PIXEL_UNPACK_BUFFER, 3 * (width*height), buff->pdata, GL_DYNAMIC_DRAW);


                               glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, 0); ///buff->pdata);
                               glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
                           }

                           buff->needsRefill = true;
                       }
                   }
               }
               else { this->setVisible(false); }
           }
       }
    }

    The getData function, which tells which framebuffer to use:

    video::frameData* video::getData() {
       if (firstBuff) {
           if (buf1.needsRefill == false) {
               ///firstBuff = false;
               return &buf1;///.pdata;
           }
       }
       else { //if false
           if (buf2.needsRefill == false) {
               ///firstBuff = true;
               return &buf2;///.pdata;
           }
       }
       return nullptr;
    }

    I’m not sure what else to include; I’ve pasted the whole code to pastebin.
    video.cpp: https://pastebin.com/cWGT6APn
    video.h: https://pastebin.com/DswAXwXV

    To summarize the problem:

    How do I properly implement decoding in a separate thread, and how do I optimize the current code?

    Currently the video lags when another thread or the main thread gets heavy, and it then fails to decode fast enough.
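
    One standard way to avoid both the sleep() polling and the racy bool flags is a condition variable per buffer: the decoder blocks until the render thread has handed a slot back, and the render thread notifies the decoder right after the PBO upload. Below is a minimal sketch of that idea, not a drop-in replacement for the code above; decodeOne() is a hypothetical stand-in for getNextFrame() plus getCurrentRBGConvertedFrame():

    #include <atomic>
    #include <condition_variable>
    #include <mutex>

    struct FrameSlot {
        std::mutex m;
        std::condition_variable cv;
        bool needsRefill = true;   // true once the render thread has consumed the slot
        // ... AVFrame*, converted RGB buffer, PBO id, etc.
    };

    std::atomic<bool> running{true};

    void decodeThread(FrameSlot& slot) {
        while (running) {
            std::unique_lock<std::mutex> lock(slot.m);
            // Block here instead of polling: wakes only when the slot is free
            // or the application is shutting down.
            slot.cv.wait(lock, [&] { return slot.needsRefill || !running; });
            if (!running) break;
            // decodeOne(slot);          // hypothetical: decode + YUV->RGB into the slot
            slot.needsRefill = false;    // slot now holds a fresh frame
        }
    }

    // Render thread, after glTexSubImage2D has consumed the slot's PBO:
    void releaseSlot(FrameSlot& slot) {
        {
            std::lock_guard<std::mutex> lock(slot.m);
            slot.needsRefill = true;     // hand the slot back to the decoder
        }
        slot.cv.notify_one();            // wake the decoder instead of letting it spin
    }

    With two such slots the decoder can fill one while the renderer draws the other, which keeps the double-buffering already present in the code, and neither thread ever busy-waits.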