Newest 'ffmpeg' Questions - Stack Overflow

http://stackoverflow.com/questions/tagged/ffmpeg

Articles published on the site

  • How to seek one frame forward in ffmpeg [closed]

    March 10, by Summit

    I want to seek one frame forward each time I call this function, but it gets stuck on the first seeked frame and does not move forward.

    void seekFrameUp() {
        if (!fmt_ctx || video_stream_index == -1) return;
    
        AVRational frame_rate = fmt_ctx->streams[video_stream_index]->r_frame_rate;
        if (frame_rate.num == 0) return;  // Avoid division by zero
    
        // Compute frame duration in AV_TIME_BASE_Q
        int64_t frame_duration = av_rescale_q(1,
            av_make_q(frame_rate.den, frame_rate.num),
            AV_TIME_BASE_Q);
    
        int64_t next_pts = requestedTimestamp + frame_duration;
    
        qDebug() << "Seeking forward: " << next_pts
            << " (Current PTS: " << requestedTimestamp
            << ", Frame Duration: " << frame_duration << ")";
    
        requestFrameAt(next_pts);
    
        // Update the requested timestamp after seeking
        requestedTimestamp = next_pts;
    }
    
    void requestFrameAt(int64_t timestamp) {
         {
             std::lock_guard lock(mtx);
             decoding = true;  // Ensure the thread keeps decoding when needed
         }
         cv.notify_one();
     }
    
    
    void decodeLoop() {
        while (!stopThread) {
            std::unique_lock lock(mtx);
            cv.wait(lock, [this] { return decoding || stopThread; });
    
            if (stopThread) break;
    
            // Avoid redundant seeking
            if (requestedTimestamp == lastRequestedTimestamp) {
                decoding = false;
                continue;
            }
    
           
    
            lastRequestedTimestamp.store(requestedTimestamp.load());
            int64_t target_pts = av_rescale_q(requestedTimestamp, AV_TIME_BASE_Q, fmt_ctx->streams[video_stream_index]->time_base);
    
            target_pts = FFMAX(target_pts, 0); // Ensure it's not negative
    
            if (av_seek_frame(fmt_ctx, video_stream_index, target_pts, AVSEEK_FLAG_ANY) >= 0) {
                avcodec_flush_buffers(codec_ctx);  // Clear old frames from the decoder
                qDebug() << "Seek successful to PTS:" << target_pts;
            }
            else {
                qDebug() << "Seeking failed!";
                decoding = false;
                continue;
            }
    
            lock.unlock();
    
            // Keep decoding until we receive a valid frame
            bool frameDecoded = false;
            while (av_read_frame(fmt_ctx, pkt) >= 0) {
                if (pkt->stream_index == video_stream_index) {
                    if (avcodec_send_packet(codec_ctx, pkt) == 0) {
                        while (avcodec_receive_frame(codec_ctx, frame) == 0) {
                            qDebug() << "FRAME DECODED ++++++++++++ PTS:" << frame->pts;
                            if (frame->pts != AV_NOPTS_VALUE) {
                                // Rescale PTS to AV_TIME_BASE_Q
                                int64_t pts_in_correct_base = av_rescale_q(frame->pts,
                                    fmt_ctx->streams[video_stream_index]->time_base,
                                    AV_TIME_BASE_Q);
    
                                // Ensure we don’t reset to 0 incorrectly
                                if (pts_in_correct_base > 0) {
                                    current_pts.store(pts_in_correct_base);
                                    qDebug() << "Updated current_pts to:" << current_pts.load();
                                }
                                else {
                                    qDebug() << "Warning: Decoded frame has PTS <= 0, keeping last valid PTS.";
                                }
                            }
                            else {
                                qDebug() << "Invalid frame->pts (AV_NOPTS_VALUE)";
                            }
    
                            QImage img = convertFrameToImage(frame);
                            emit frameDecodedSignal(img);
                    
                            frameDecoded = true;
                            break;  // Exit after the first valid frame
                        }
    
                        if (frameDecoded) {
                            decoding = (requestedTimestamp != lastRequestedTimestamp);
                            av_packet_unref(pkt);  // break below skips the unref at the loop bottom
                            break;
                        }
                    }
                }
                av_packet_unref(pkt);
            }
        }
    }
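
    The frame-duration computation in seekFrameUp above is just a time-base conversion. A minimal Python stand-in for av_rescale_q (an assumption for illustration only; the real FFmpeg function also handles rounding modes and 64-bit overflow) shows the expected arithmetic:

```python
from fractions import Fraction

AV_TIME_BASE = 1_000_000  # AV_TIME_BASE_Q is 1/1000000, i.e. microseconds

def rescale_q(value, src_tb, dst_tb):
    """Simplified av_rescale_q: convert value from src (num, den) time base to dst."""
    return int(value * Fraction(*src_tb) / Fraction(*dst_tb))

# One frame at 25 fps: the C code builds the time base as frame_rate.den/frame_rate.num,
# i.e. 1/25 s per frame, which is 40000 in AV_TIME_BASE_Q units.
frame_duration = rescale_q(1, (1, 25), (1, AV_TIME_BASE))
print(frame_duration)  # 40000
```

    So at 25 fps each call should advance requestedTimestamp by 40000 µs per step; if the decoder still returns the same frame, the seek behavior itself is a more likely suspect than the arithmetic.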
    
  • How to Use SVG Image Files Directly in FFmpeg? [closed]

    March 10, by Pubg Mobile

    I generated a bar chart race using the Flourish Studio website and captured the frames as a PDF sequence using a Python Playwright script. Then, I converted all the PDF files into SVG format using the following Python script, because SVG is the only image format that maintains quality without loss when zoomed in:

    import os
    import subprocess
    import multiprocessing
    
    # Define paths
    pdf2svg_path = r"E:\Desktop\dist-64bits\pdf2svg.exe"  # Full path to pdf2svg.exe
    input_dir = r"E:\Desktop\New folder (4)\New folder"
    output_dir = r"E:\Desktop\New folder (4)\New folder (2)"
    
    # Ensure output directory exists
    os.makedirs(output_dir, exist_ok=True)
    
    def convert_pdf_to_svg(pdf_file):
        """ Convert a single PDF file to SVG. """
        input_pdf = os.path.join(input_dir, pdf_file)
        output_svg = os.path.join(output_dir, os.path.splitext(pdf_file)[0] + ".svg")
    
        try:
            subprocess.run([pdf2svg_path, input_pdf, output_svg], check=True)
            print(f"Converted: {pdf_file} -> {output_svg}")
        except FileNotFoundError:
            print(f"Error: Could not find {pdf2svg_path}. Make sure the path is correct.")
        except subprocess.CalledProcessError:
            print(f"Error: Conversion failed for {pdf_file}")
    
    if __name__ == "__main__":
        # Get list of PDF files in input directory
        pdf_files = [f for f in os.listdir(input_dir) if f.lower().endswith(".pdf")]
    
        # Use multiprocessing to speed up conversion
        with multiprocessing.Pool(processes=multiprocessing.cpu_count()) as pool:
            pool.map(convert_pdf_to_svg, pdf_files)
    
        print("All conversions completed!")
    

    Problem:

    Now, I want to use these SVG images in FFmpeg to create a high-quality video file. However, FFmpeg does not support SVG files directly, and I have read that I must convert SVG files into PNG before using them in FFmpeg. The problem is that PNG images reduce quality, especially when zooming in, which is why I want to avoid converting to PNG.

    Is there any way to use SVG files directly in FFmpeg or another method to convert them into a high-quality video while maintaining full resolution? Any ideas or suggestions would be greatly appreciated!
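
    Two points worth noting: PNG itself is lossless, so any quality loss comes from rasterizing at too small a resolution, not from the format; and an FFmpeg build configured with --enable-librsvg can read SVG inputs directly. When that build isn't available, one sketch (assuming rsvg-convert from librsvg is installed, and hypothetical frame names frame0001.svg, frame0002.svg, ...) is to rasterize at the final video resolution first:

```shell
# Rasterize each SVG at the target resolution so no later upscaling is needed.
for f in frame*.svg; do
  rsvg-convert -w 3840 -h 2160 --keep-aspect-ratio "$f" -o "${f%.svg}.png"
done

# Encode the PNG sequence at a high-quality setting.
ffmpeg -framerate 30 -i frame%04d.png -c:v libx264 -crf 12 -pix_fmt yuv420p out.mp4
```

    Rendering at (or above) the final display resolution keeps the vector quality; the PNG step then adds no further loss.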

  • OpenGL to FFMpeg encode [closed]

    March 10, by Ian A McElhenny

    I have an OpenGL buffer that I need to forward directly to FFmpeg to do NVENC-based H264 encoding.

    My current way of doing this is glReadPixels to get the pixels out of the framebuffer and then passing that pointer into FFmpeg so that it can encode the frame into H264 packets for RTSP. However, this is bad because I have to copy bytes out of GPU RAM into CPU RAM, only to copy them back into the GPU for encoding.

    How do you bypass the need to copy to and from the CPU?

  • No such filter in ffmpeg

    March 10, by sneha desai

    I am trying to create a slideshow with the command below.

    Here is the command I have executed:

     ffmpeg
    -loop 1 -t 1 -i /sdcard/input0.png 
    -loop 1 -t 1 -i /sdcard/input1.png 
    -loop 1 -t 1 -i /sdcard/input2.png 
    -loop 1 -t 1 -i /sdcard/input3.png 
    -loop 1 -t 1 -i /sdcard/input4.png 
    -filter_complex 
    "[0:v]trim=duration=15,fade=t=out:st=14.5:d=0.5[v0]; 
     [1:v]trim=duration=15,fade=t=in:st=0:d=0.5,fade=t=out:st=14.5:d=0.5[v1]; 
     [2:v]trim=duration=15,fade=t=in:st=0:d=0.5,fade=t=out:st=14.5:d=0.5[v2]; 
     [3:v]trim=duration=15,fade=t=in:st=0:d=0.5,fade=t=out:st=14.5:d=0.5[v3]; 
     [4:v]trim=duration=15,fade=t=in:st=0:d=0.5,fade=t=out:st=14.5:d=0.5[v4]; 
     [v0][v1][v2][v3][v4]concat=n=5:v=1:a=0,format=yuv420p[v]" -map "[v]" /sdcard/out.mp4
    

    On executing this command, it gives an error like:

     onFailure: ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers
      built with gcc 4.8 (GCC)
      configuration: --target-os=linux --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi- --arch=arm --cpu=cortex-a8 --enable-runtime-cpudetect --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot --enable-pic --enable-libx264 --enable-libass --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-fontconfig --enable-pthreads --disable-debug --disable-ffserver --enable-version3 --enable-hardcoded-tables --disable-ffplay --disable-ffprobe --enable-gpl --enable-yasm --disable-doc --disable-shared --enable-static --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all' --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie' --extra-libs='-lpng -lexpat -lm' --extra-cxxflags=
      libavutil      55. 17.103 / 55. 17.103
      libavcodec     57. 24.102 / 57. 24.102
      libavformat    57. 25.100 / 57. 25.100
      libavdevice    57.  0.101 / 57.  0.101
      libavfilter     6. 31.100 /  6. 31.100
      libswscale      4.  0.100 /  4.  0.100
      libswresample   2.  0.101 /  2.  0.101
      libpostproc    54.  0.100 / 54.  0.100
    [mjpeg @ 0x4362af10] Changing bps to 8
    Input #0, image2, from '/sdcard/img0001.jpg':
      Duration: 00:00:00.04, start: 0.000000, bitrate: 2410 kb/s
        Stream #0:0: Video: mjpeg, yuvj444p(pc, bt470bg/unknown/unknown), 259x194 [SAR 1:1 DAR 259:194], 25 fps, 25 tbr, 25 tbn, 25 tbc
    [mjpeg @ 0x436300a0] Changing bps to 8
    Input #1, image2, from '/sdcard/img0002.jpg':
      Duration: 00:00:00.04, start: 0.000000, bitrate: 2053 kb/s
        Stream #1:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 290x174 [SAR 1:1 DAR 5:3], 25 fps, 25 tbr, 25 tbn, 25 tbc
    [mjpeg @ 0x436383a0] Changing bps to 8
    Input #2, image2, from '/sdcard/img0003.jpg':
      Duration: 00:00:00.04, start: 0.000000, bitrate: 3791 kb/s
        Stream #2:0: Video: mjpeg, yuvj444p(pc, bt470bg/unknown/unknown), 300x168 [SAR 1:1 DAR 25:14], 25 fps, 25 tbr, 25 tbn, 25 tbc
    [mjpeg @ 0x43648f50] Changing bps to 8
    Input #3, image2, from '/sdcard/img0004.jpg':
      Duration: 00:00:00.04, start: 0.000000, bitrate: 1796 kb/s
        Stream #3:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 259x194 [SAR 1:1 DAR 259:194], 25 fps, 25 tbr, 25 tbn, 25 tbc
    [mjpeg @ 0x437b4070] Changing bps to 8
    Input #4, image2, from '/sdcard/img0005.jpg':
      Duration: 00:00:00.04, start: 0.000000, bitrate: 1083 kb/s
        Stream #4:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 212x160 [SAR 1:1 DAR 53:40], 25 fps, 25 tbr, 25 tbn, 25 tbc
    [AVFilterGraph @ 0x4393c960] No such filter: '"'
    Error initializing complex filters.
    Invalid argument
    

    I used this demo: https://github.com/WritingMinds/ffmpeg-android-java
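
    The last log line points at the actual problem: `No such filter: '"'` means a literal double-quote character reached the filtergraph parser. The quotes around the -filter_complex value are shell syntax, but the ffmpeg-android-java wrapper splits the command string itself rather than passing it through a shell, so they arrive verbatim. A sketch of the likely fix (an assumption: the same command with the quotes removed):

```shell
# No quotes around the filtergraph or the -map label; the wrapper splits on
# whitespace only, so the whole graph stays one argument as long as it has no spaces.
ffmpeg -loop 1 -t 1 -i /sdcard/input0.png -loop 1 -t 1 -i /sdcard/input1.png \
  -loop 1 -t 1 -i /sdcard/input2.png -loop 1 -t 1 -i /sdcard/input3.png \
  -loop 1 -t 1 -i /sdcard/input4.png \
  -filter_complex [0:v]trim=duration=15,fade=t=out:st=14.5:d=0.5[v0];[1:v]trim=duration=15,fade=t=in:st=0:d=0.5,fade=t=out:st=14.5:d=0.5[v1];[2:v]trim=duration=15,fade=t=in:st=0:d=0.5,fade=t=out:st=14.5:d=0.5[v2];[3:v]trim=duration=15,fade=t=in:st=0:d=0.5,fade=t=out:st=14.5:d=0.5[v3];[4:v]trim=duration=15,fade=t=in:st=0:d=0.5,fade=t=out:st=14.5:d=0.5[v4];[v0][v1][v2][v3][v4]concat=n=5:v=1:a=0,format=yuv420p[v] \
  -map [v] /sdcard/out.mp4
```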

  • ffmpeg arguments for youtube upload [duplicate]

    March 9, by Hoopes

    I'm currently having an issue in my iOS app where YouTube sharing of a video created by FFmpeg seems broken. I am starting with an audio file and an image that I will use as the static background of the video.

    ffmpeg -y 
    -i "/path/to/audio.m4a" 
    -r 24 
    -i "/path/to/background.png" 
    -codec:a aac_at 
    -codec:v h264_videotoolbox 
    -brand mp42 
    -vf "scale=1080:1080" 
    -vf "scale=out_color_matrix=bt709" 
    -color_primaries bt709 
    -color_trc bt709 
    -colorspace bt709 
    -movflags +faststart 
    "output.mp4"
    

    Note that since I'm on an iOS device, I am using h264_videotoolbox, which uses Apple hardware to accelerate encoding.

    When I use the regular iOS share screen and share to something like Slack, the video seems to work fine. However, when I try to share to YouTube via the same share screen, it looks like it can't detect the duration of the video:

    [Screenshot: YouTube detecting zero duration]

    Another thing to note: when I download the file and upload it to YouTube via a regular web browser, it seems to be fine, which somewhat points to it not being FFmpeg-related; yet it shares fine to other apps.

    I'm using FFmpeg version 5.1.2.
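
    One detail in the command itself may be relevant: -vf is given twice, and ffmpeg keeps only the last per-stream instance of an option, so the scale=1080:1080 filter is silently dropped. A combined single chain (a sketch, not the original command) would be:

```shell
# The scale filter accepts out_color_matrix as a named option, so both
# operations can live in one -vf chain instead of two competing -vf flags.
-vf "scale=1080:1080:out_color_matrix=bt709"
```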